In the first part of this series, we explored the programming jokes API and built a simple automation to extract the meaning of each joke. In this part, we'll automate the cultural-appropriateness check and email notifications using an LLM.
Developers prefer structured data because it's machine-readable and easy to automate. However, LLMs are primarily designed for conversational, natural language output. With the increasing use of LLMs in programming and automation, model providers have started prioritizing structured outputs for developers. For instance, starting with GPT-4, OpenAI has trained its models to follow user instructions more strictly.
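One concrete sign of this shift is the Chat Completions "JSON mode": a `response_format` field that tells the model to emit a single valid JSON object instead of prose. A minimal sketch of what such a request payload could look like (availability depends on the model and API version you use):

```python
import json

# Sketch: asking for machine-readable output instead of conversational prose.
# The `response_format` field is the Chat Completions "JSON mode" switch.
payload = {
    "messages": [
        {"role": "user", "content": "Explain this joke and reply only in JSON."}
    ],
    "temperature": 0.0,
    "response_format": {"type": "json_object"},
}

print(json.dumps(payload, indent=2))
```

Even without JSON mode, clearly instructing the model to respond in JSON (as we do below) gets most of the way there.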
For more details on how OpenAI improved programmer workflows in GPT-5, see my earlier blog: GPT-5 for Programmers.
We'll take advantage of this by instructing the LLM to respond in a structured JSON format. Since we're asking for the meaning of multiple jokes, it's best to separate the instructions for output structure from the actual jokes. The output instructions are generic, while the jokes vary each time. Mixing both in a single prompt would generate unique text combinations, reducing the effectiveness of the KV cache. Therefore, we'll place the output instructions in a special prompt known as the system prompt, and the jokes in the user prompt. Here's how we construct our system prompt:
automate_with_ai.py: SYSTEM_PROMPT
SYSTEM_PROMPT = (
    "You are a helpful assistant that explains a programmer joke and identifies whether it is culturally appropriate to be shared in a professional office environment.\n"
    "Goals:\n"
    "(1) Decide whether the joke is funny or not (funny: true/false).\n"
    "(2) Categorize the joke into one of these categories: 'Safe for work', 'Offensive', 'Dark humor'.\n"
    "(3) And briefly explain the joke in 1 paragraph.\n"
    "Your response must be a single JSON object with keys: funny (bool), category (string), explanation (string).\n"
)
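Because this system prompt is identical for every request, each call to the model shares the same token prefix and only the user message changes, which is exactly what keeps the KV cache effective. A minimal sketch of how the messages list is assembled per joke (mirroring the call made later in process_joke_file):

```python
SYSTEM_PROMPT = "...(the instructions shown above, abbreviated here)..."

def build_messages(joke: str) -> list:
    # Static system prompt first (a cache-friendly, stable prefix);
    # only the user message varies from joke to joke.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"joke: `{joke}`"},
    ]

messages = build_messages("There are 10 kinds of people...")
```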
As shown above, we delegate the task of determining whether a joke is funny and appropriate for the workplace to the LLM itself. Crucially, we instruct the LLM to return its output strictly in JSON format.
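For illustration, a well-formed response following that schema would look something like this (the explanation text here is invented), and parsing it is a single json.loads call:

```python
import json

# A hypothetical model response that follows the system prompt's schema.
raw = '{"funny": true, "category": "Safe for work", "explanation": "It puns on binary notation."}'

result = json.loads(raw)
assert isinstance(result["funny"], bool)
assert result["category"] in {"Safe for work", "Offensive", "Dark humor"}
print(result["explanation"])  # → It puns on binary notation.
```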
Then, in our process_joke_file function, we make two modifications.
We have also created an external script, send_email.py (full code available at the end of this post). This script takes a recipient address along with the joke and its explanation, and queues an email in the outbox. The send_email function in our code is responsible for invoking this script.
Since the LLM now returns structured JSON output, we can easily inspect its response and, based on its assessment, call the send_email function directly from our code.
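Since the model can occasionally drift from the schema, the parsed result may be None or missing keys, so it is worth guarding the check before acting. A small sketch of such a guard (should_email here is a hypothetical helper, not part of the script):

```python
from typing import Optional

def should_email(result: Optional[dict]) -> bool:
    # Hypothetical guard: act only when parsing succeeded, the expected
    # keys are present, and the model judged the joke funny and safe.
    if not result:
        return False
    return result.get("funny") is True and result.get("category") == "Safe for work"

print(should_email({"funny": True, "category": "Safe for work", "explanation": "..."}))  # → True
print(should_email(None))  # → False
```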
automate_with_ai.py: process_joke_file
result = _parse_final_json(response)
if result['funny'] and result['category'] == 'Safe for work':
    # Send email
    if send_email(joke, result['explanation']):
        logger.info("Email sent for joke %s", file_id)
    else:
        logger.error("Failed to send email for joke %s", file_id)
In this post, we took a significant step forward by automating the evaluation of jokes for cultural appropriateness and streamlining the email sending process. By leveraging the LLM’s ability to return structured JSON, we eliminated the need for tedious manual checks and made it straightforward to plug the model’s output directly into our automation pipeline. This approach not only saves time but also reduces the risk of human error.
Yet, it’s important to recognize that what we’ve built so far is still traditional automation. The LLM serves as a smart evaluator, but all the decision-making logic and possible actions are hardcoded by us. The workflow is predictable and limited to the scenarios we’ve anticipated.
But what if the LLM could do more than just provide information? Imagine a system where the LLM can actively decide which actions to take, adapt to new situations, and orchestrate workflows on its own. This is the promise of agentic workflows—where the LLM becomes an autonomous agent, capable of selecting from a toolkit of actions and dynamically shaping the automation process.
In the next part of this series, we’ll dive into building such agentic systems. We’ll explore how to empower LLMs to not just inform, but to act—unlocking a new level of flexibility and intelligence in automation.
automate_with_ai.py

import os
import sys
import json
import time
import logging
import datetime
import glob
import signal
import re
from pathlib import Path
from typing import Dict, Any, Optional

from dotenv import load_dotenv

load_dotenv()

import requests

OUTPUT_DIR = Path("/tmp/agent-001/")
STATE_FILE = OUTPUT_DIR / "state.json"

# Azure OpenAI settings - must be provided as environment variables
AZURE_ENDPOINT = os.environ.get("AZURE_OPENAI_ENDPOINT")
AZURE_KEY = os.environ.get("AZURE_OPENAI_API_KEY")
AZURE_DEPLOYMENT = os.environ.get("AZURE_OPENAI_DEPLOYMENT", "gpt-4.1")
API_VERSION = os.environ.get("AZURE_OPENAI_API_VERSION", "2024-12-01-preview")

# Ensure directories exist
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("agent")

shutdown_requested = False


def _signal_handler(signum, frame):
    global shutdown_requested
    logger.info("Signal %s received, will shut down gracefully", signum)
    shutdown_requested = True


signal.signal(signal.SIGINT, _signal_handler)
signal.signal(signal.SIGTERM, _signal_handler)


def load_state() -> Dict[str, Any]:
    if STATE_FILE.exists():
        try:
            return json.loads(STATE_FILE.read_text(encoding="utf-8"))
        except Exception:
            logger.exception("Failed to load state file, starting fresh")
    # default state
    return {"processed": {}, "last_sent": {}}


def save_state(state: Dict[str, Any]) -> None:
    STATE_FILE.write_text(json.dumps(state, indent=2), encoding="utf-8")


SYSTEM_PROMPT = (
    "You are a helpful assistant that explains a programmer joke and identifies whether it is culturally appropriate to be shared in a professional office environment.\n"
    "Goals:\n"
    "(1) Decide whether the joke is funny or not (funny: true/false).\n"
    "(2) Categorize the joke into one of these categories: 'Safe for work', 'Offensive', 'Dark humor'.\n"
    "(3) And briefly explain the joke in 1 paragraph.\n"
    "Your response must be a single JSON object with keys: funny (bool), category (string), explanation (string).\n"
)


def _extract_json(text: str) -> Optional[dict]:
    """Try to extract the first JSON object from a text blob."""
    try:
        return json.loads(text)
    except Exception:
        m = re.search(r"\{.*\}", text, re.S)
        if m:
            try:
                return json.loads(m.group(0))
            except Exception:
                return None
    return None


def chat_completion(messages, tools=None, temperature=0.0, max_tokens=800) -> Dict[str, Any]:
    """Call Azure OpenAI chat completion returning the full JSON, supporting tool (function) calls."""
    # Random jitter 3-5s to reduce rate spikes
    time.sleep(3 + (2 * os.urandom(1)[0] / 255.0))
    if not AZURE_ENDPOINT or not AZURE_KEY:
        raise RuntimeError("Azure OpenAI credentials (AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY) not set")
    url = f"{AZURE_ENDPOINT}/openai/deployments/{AZURE_DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
    headers = {
        "Content-Type": "application/json",
        "api-key": AZURE_KEY,
    }
    payload: Dict[str, Any] = {
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    if tools:
        payload["tools"] = tools
        payload["tool_choice"] = "auto"
    resp = requests.post(url, headers=headers, json=payload, timeout=90)
    resp.raise_for_status()
    return resp.json()


def _parse_final_json(content: str) -> Optional[Dict[str, Any]]:
    obj = _extract_json(content)
    if not obj:
        return None
    # Minimal validation
    if {"funny", "category", "explanation"}.issubset(obj.keys()):
        return obj
    return obj  # return anyway; caller can decide


def send_email(joke: str, explanation: str) -> bool:
    group_email = "all@example.com"
    cmd = [sys.executable, "send_email.py", group_email, joke, explanation]
    logger.info("Sending email to %s with joke", group_email)
    try:
        import subprocess

        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            logger.error("Failed to send email: %s", result.stderr)
            return False
        logger.info("Email sent successfully")
        return True
    except Exception as e:
        logger.exception("Exception while sending email: %s", e)
        return False


def process_joke_file(path: Path, state: Dict[str, Any]) -> None:
    logger.info("Processing joke file: %s", path)
    joke = path.read_text(encoding="utf-8").strip()
    file_id = path.name
    if file_id in state.get("processed", {}):
        logger.info("Already processed %s, skipping", file_id)
        return
    try:
        messages = [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"joke: `{joke}`"},
        ]
        response = chat_completion(messages)["choices"][0]["message"]["content"]
        result = _parse_final_json(response)
        if result['funny'] and result['category'] == 'Safe for work':
            # Send email
            if send_email(joke, result['explanation']):
                logger.info("Email sent for joke %s", file_id)
            else:
                logger.error("Failed to send email for joke %s", file_id)
    except Exception as e:
        logger.exception("LLM tool-driven processing failed for %s\nException: %s", file_id, e)
        sys.exit(1)
    # Mark processed
    state.setdefault("processed", {})[file_id] = {
        "agent": "002",
        "joke": joke,
        "processed_at": datetime.datetime.utcnow().isoformat(),
        "funny": result["funny"],
        "explanation": result["explanation"],
        "category": result["category"],
    }
    save_state(state)


def main_loop(poll_interval: int = 60):
    state = load_state()
    logger.info("Agent started, watching %s", OUTPUT_DIR)
    while not shutdown_requested:
        txt_files = sorted(glob.glob(str(OUTPUT_DIR / "*.txt")))
        for f in txt_files:
            if shutdown_requested:
                break
            process_joke_file(Path(f), state)
        # Sleep and be responsive to shutdown
        for _ in range(int(poll_interval)):
            if shutdown_requested:
                break
            time.sleep(1)
    logger.info("Agent shutting down")


if __name__ == "__main__":
    main_loop()
send_email.py

#!/usr/bin/env python3
import sys
import json
import logging
from pathlib import Path
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("send_email")

OUTBOX = Path("/tmp/agent-001/outbox.json")
OUTBOX.parent.mkdir(parents=True, exist_ok=True)


def main():
    if len(sys.argv) < 4:
        print("Usage: send_email.py <to_group> <joke> <explanation>")
        sys.exit(2)
    to_group = sys.argv[1]
    joke = sys.argv[2]
    explanation = sys.argv[3]
    # Append to outbox file as a record
    record = {
        "to": to_group,
        "joke": joke,
        "explanation": explanation,
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    if OUTBOX.exists():
        arr = json.loads(OUTBOX.read_text(encoding="utf-8"))
    else:
        arr = []
    arr.append(record)
    OUTBOX.write_text(json.dumps(arr, indent=2), encoding="utf-8")
    logger.info("Queued email to %s", to_group)


if __name__ == "__main__":
    main()