📤 REQUEST: ---------------------------------------- System: System[0]: type: text text: You are Claude Code, Anthropic's official CLI for Claude. System[1]: type: text text: You are an interactive CLI tool that helps users with software engineering tasks. Use the instructions below and the tools available to you to assist the user.
...
Messages: Message[0]: role: user content: Content[0]: type: text text: <system-reminder> This is a reminder that your todo list is currently empty. DO NOT mention this to the user explicitly because they are already aware. If you are working on tasks that would benefit from a todo list please use the TodoWrite tool to create one. If not, please feel free to ignore. Again do not mention this message to the user. </system-reminder> Content[1]: type: text text: <system-reminder> As you answer the user's questions, you can use the following context: # important-instruction-reminders Do what has been asked; nothing more, nothing less. NEVER create files unless they're absolutely necessary for achieving your goal. ALWAYS prefer editing an existing file to creating a new one. NEVER proactively create documentation files (*.md) or README files. Only create documentation files if explicitly requested by the User. IMPORTANT: this context may or may not be relevant to your tasks. You should not respond to this context unless it is highly relevant to your task. </system-reminder> Content[2]: type: text text: create a hello world function in python Tools: ...
Tool[8]: Name: Write Description: Writes a file to the local filesystem.
Usage: - This tool will overwrite the existing file if there is one at the provided path. - If this is an existing file, you MUST use the Read tool first to read the file's contents. This tool will fail if you did not read the file first. - ALWAYS prefer editing existing files in the codebase. NEVER write new files unless explicitly required. - NEVER proactively create documentation files (*.md) or README files. Only create documentation files if explicitly requested by the User. - Only use emojis if the user explicitly requests it. Avoid writing emojis to files unless asked. Input Schema: Type: object Required: file_path, content Properties: file_path, content
Messages: Message[0]: role: user content: Content[0]: type: text text: <system-reminder> This is a reminder that your todo list is currently empty. DO NOT mention this to the user explicitly because they are already aware. If you are working on tasks that would benefit from a todo list please use the TodoWrite tool to create one. If not, please feel free to ignore. Again do not mention this message to the user. </system-reminder> Content[1]: type: text text: <system-reminder> As you answer the user's questions, you can use the following context: # important-instruction-reminders Do what has been asked; nothing more, nothing less. NEVER create files unless they're absolutely necessary for achieving your goal. ALWAYS prefer editing an existing file to creating a new one. NEVER proactively create documentation files (*.md) or README files. Only create documentation files if explicitly requested by the User. IMPORTANT: this context may or may not be relevant to your tasks. You should not respond to this context unless it is highly relevant to your task. </system-reminder> Content[2]: type: text text: create a hello world function in python Message[1]: role: assistant content: Content[0]: type: text text: (no content) Content[1]: type: tool_use id: toolu_bdrk_01Tv5P8iPpKfvXkD7jGmjLFr name: Write input: {'file_path': '/Users/russellluo/Projects/demo/hello_world.py', 'content': 'def hello_world():\n print("Hello, World!")\n\nif __name__ == "__main__":\n hello_world()'} Message[2]: role: user content: Content[0]: tool_use_id: toolu_bdrk_01Tv5P8iPpKfvXkD7jGmjLFr type: tool_result content: File created successfully at: /Users/russellluo/Projects/demo/hello_world.py Tools: ...
📥 RESPONSE: ---------------------------------------- 📡 Parsed SSE Response: Created `hello_world.py` with a simple hello world function.
Thinking & Acting (FLOW #1): This is the start of the loop. After receiving the system context and the user instruction (create a hello world function in python), the LLM reasons internally and decides to act: it calls the Write tool, generating the complete code content and the target file path.
Observing & Completing (FLOW #2): This is the next step of the loop. After the user confirms, the Write tool is executed and the system receives the result (File created successfully at...). Having observed this feedback, the LLM judges the task complete, exits the loop, and generates the final response to the user (Created `hello_world.py` with a simple hello world function.).
# System prompt for Claude Code
SYSTEM = [
    {
        "type": "text",
        "text": "You are Claude Code, Anthropic's official CLI for Claude.",
    },
    {
        "type": "text",
        "text": f"""
You are an interactive CLI tool that helps users with software engineering tasks. Use the instructions below and the tools available to you to assist the user.

Here is useful information about the environment you are running in:
<env>
Working directory: {CWD}
</env>
""",
    },
]
# Write tool definition
WRITE_TOOL = {
    "name": "Write",
    "description": "Writes a file to the local filesystem.",
    "input_schema": {
        "type": "object",
        "properties": {
            "file_path": {
                "type": "string",
                "description": "The absolute path to the file to write (must be absolute, not relative)",
            },
            "content": {
                "type": "string",
                "description": "The content to write to the file",
            },
        },
        "required": ["file_path", "content"],
        "additionalProperties": False,
    },
}
def execute_tool(tool_name: str, tool_input: dict[str, Any]) -> str:
    """Execute tool function with user confirmation"""
    if tool_name.lower() != "write":
        return f"Error: Unknown tool `{tool_name}`"

    # User confirmation
    while True:
        file_name = os.path.basename(tool_input["file_path"])
        confirmation = (
            input(f"""
```
{tool_input["content"]}
```
Do you want to create `{file_name}`? (Y/N)
> """)
            .strip()
            .lower()
        )
        if confirmation == "y":
            return write_file(tool_input["file_path"], tool_input["content"])
        else:
            return f"User rejected write to `{file_name}`"


def write_file(file_path: str, content: str) -> str:
    """Write content to a file at the specified path"""
    try:
        os.makedirs(os.path.dirname(file_path), exist_ok=True)
        with open(file_path, "w", encoding="utf-8") as f:
            f.write(content)
        return f"File created successfully at: {file_path}"
    except Exception as e:
        return f"Error writing file: {str(e)}"
def run_agent_in_loop(messages: list[dict], max_turns: int = 5) -> None:
    """Run the coding agent in a loop"""
    for turn in range(max_turns):
        # Call Claude
        response = client.messages.create(
            model=MODEL,
            max_tokens=1024,
            system=SYSTEM,
            tools=[WRITE_TOOL],
            messages=messages,
        )

        response_content = response.content[0]

        # Text response - task completed
        if response_content.type == "text":
            print(f"\n⏺ {response_content.text}")
            break

        # Tool call - execute the tool and feed the result back to Claude
        elif response_content.type == "tool_use":
            tool_result = execute_tool(response_content.name, response_content.input)
            messages.append({"role": "assistant", "content": response.content})
            messages.append({
                "role": "user",
                "content": [{
                    "type": "tool_result",
                    "tool_use_id": response_content.id,
                    "content": tool_result,
                }],
            })
$ python lite_claude_code_v1.py
==================================================
🤖 Lite Claude Code (^C to exit)
==================================================

> create a hello world function in python

```
def hello_world():
    print("Hello, World!")

if __name__ == "__main__":
    hello_world()
```
Do you want to create `hello_world.py`? (Y/N)
> y

⏺ I've created a simple Python file with a "Hello, World!" function.
...
>
{"request":{"model":"claude-3-5-haiku-20241022","max_tokens":512,"messages":[{"role":"user","content":"Please write a 5-10 word title for the following conversation:\n\n[Last 12 of 14 messages]\n\nUser: \n\nClaude: (no content)\n\nUser: \n\nClaude: (no content)\n\nUser: \n\nClaude: (no content)\n\nUser: \n\nClaude: I've created a mitmproxy addon `json_logger.py` that meets your requirements:\n\n## Key Features:\n\n1. **Request Body Filtering**: For JSON requests, only keeps `system`, `messages`, and `tools` fields\n2. **SSE Response Handling**: Parses Server-Sent Events and concatenates chunks into a structured format\n3. **JSON Lines Output**: Writes each request/response pair as a single JSON line to `requests_responses.jsonl`\n\n## Usage:\n\n```bash\n# Run with mitmproxy\nmitmdump -s json_logger.py\n\n# Or with mitmweb\nmitmweb -s json_logger.py\n```\n\nThe addon captures:\n- URL, HTTP method, and status code\n- Filtered request body (JSON fields: system, messages, tools)\n- Parsed response body (with special SSE chunk concatenation)\n- Content type information\n\nFor SSE responses, it extracts individual chunks and provides both the raw chunks array and concatenated content, making it easy to reconstruct streaming responses.\n\nRespond with the title for the conversation and nothing else."}],"system":[{"type":"text","text":"Summarize this coding conversation in under 50 characters.\nCapture the main task, key files, problems addressed, and current status.","cache_control":{"type":"ephemeral"}}],"temperature":0,"metadata":{"user_id":"user_f3bbbca5b09b9b5ebf3916e21fd50d585787d9b818164f839f3f9b9b4f58bfe2_account__session_08316681-f6ac-4fcb-80c6-142f1e956373"},"stream":true},"response":"event: message_start\ndata: {\"type\": \"message_start\", \"message\": {\"id\": \"msg_2d398cea-df7d-4792-8017-69fd8383b883\", \"type\": \"message\", \"role\": \"assistant\", \"content\": [], \"model\": \"anthropic/claude-3.5-haiku-20241022\", \"stop_reason\": null, 
\"stop_sequence\": null, \"usage\": {\"input_tokens\": 0, \"output_tokens\": 0}}}\n\nevent: content_block_start\ndata: {\"type\": \"content_block_start\", \"index\": 0, \"content_block\": {\"type\": \"text\", \"text\": \"\"}}\n\nevent: content_block_delta\ndata: {\"type\": \"content_block_delta\", \"index\": 0, \"delta\": {\"type\": \"text_delta\", \"text\": \"M\"}}\n\nevent: content_block_delta\ndata: {\"type\": \"content_block_delta\", \"index\": 0, \"delta\": {\"type\": \"text_delta\", \"text\": \"itmproxy JSON\"}}\n\nevent: content_block_delta\ndata: {\"type\": \"content_block_delta\", \"index\": 0, \"delta\": {\"type\": \"text_delta\", \"text\": \" Logger for\"}}\n\nevent: content_block_delta\ndata: {\"type\": \"content_block_delta\", \"index\": 0, \"delta\": {\"type\": \"text_delta\", \"text\": \" API\"}}\n\nevent: content_block_delta\ndata: {\"type\": \"content_block_delta\", \"index\": 0, \"delta\": {\"type\": \"text_delta\", \"text\": \" Request\"}}\n\nevent: content_block_delta\ndata: {\"type\": \"content_block_delta\", \"index\": 0, \"delta\": {\"type\": \"text_delta\", \"text\": \"/Response Tracking\"}}\n\nevent: content_block_stop\ndata: {\"type\": \"content_block_stop\", \"index\": 0}\n\nevent: message_delta\ndata: {\"type\": \"message_delta\", \"delta\": {\"stop_reason\": \"end_turn\"}, \"usage\": {\"input_tokens\": 284, \"output_tokens\": 11}}\n\nevent: message_stop\ndata: {\"type\": \"message_stop\"}\n\n","flow_id":"75bbd359-8b0c-42ef-a376-fbd4c4660693","duration":2.0904617309570312}
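The streamed response above arrives as Server-Sent Events, with the final text split across many small `text_delta` chunks. A minimal sketch (not the article's actual mitmproxy addon) of how such chunks can be concatenated back into the full text:

```python
import json

def concat_sse_text(raw: str) -> str:
    """Join the text_delta chunks of an Anthropic-style SSE stream."""
    parts = []
    for line in raw.splitlines():
        if not line.startswith("data: "):
            continue
        event = json.loads(line[len("data: "):])
        # Only content_block_delta events carry incremental text.
        if event.get("type") == "content_block_delta":
            delta = event.get("delta", {})
            if delta.get("type") == "text_delta":
                parts.append(delta.get("text", ""))
    return "".join(parts)

sse = (
    'data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "M"}}\n'
    'data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "itmproxy JSON"}}\n'
)
print(concat_sse_text(sse))  # Mitmproxy JSON
```

Applied to the captured stream above, the chunks `"M"`, `"itmproxy JSON"`, `" Logger for"`, `" API"`, `" Request"`, `"/Response Tracking"` join into the conversation title "Mitmproxy JSON Logger for API Request/Response Tracking".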
func (a *Address) Update(other Address, fm fieldmask.FieldMask) {
	if len(fm) == 0 {
		// Clear the entire address.
		*a = other
		return
	}

	if fm.Has("country") {
		a.Country = other.Country
	}
	if fm.Has("province") {
		a.Province = other.Province
	}
	if fm.Has("city") {
		a.City = other.City
	}
}
func (p *Person) Update(other Person, fm fieldmask.FieldMask) {
	if len(fm) == 0 {
		// Clear the entire person.
		*p = other
		return
	}

	if fm.Has("name") {
		p.Name = other.Name
	}
	if fm.Has("age") {
		p.Age = other.Age
	}
	if addressFM, ok := fm.FieldMask("address"); ok {
		p.Address.Update(other.Address, addressFM)
	}
}
It is fine to start with a monolithic architecture, but make sure to modularize it and architect it with the above three microservice principles (single purpose, loose coupling and high cohesion), except that the “services” are implemented in the same tech stack, deployed together and run in the same process.
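A hypothetical sketch of what this "modular monolith" shape can look like: each module has a single purpose, and modules interact only through narrow public interfaces, even though everything ships and runs in one process. The names (`InventoryService`, `OrderService`) are illustrative, not from any real codebase.

```python
class InventoryService:
    """Single purpose: track stock levels."""

    def __init__(self):
        self._stock = {"widget": 3}

    def reserve(self, sku: str) -> bool:
        if self._stock.get(sku, 0) > 0:
            self._stock[sku] -= 1
            return True
        return False


class OrderService:
    """Single purpose: create orders. Loosely coupled: it depends only on
    InventoryService's public reserve() method, never on its internals."""

    def __init__(self, inventory: InventoryService):
        self._inventory = inventory

    def place_order(self, sku: str) -> str:
        if self._inventory.reserve(sku):
            return f"order placed for {sku}"
        return f"{sku} out of stock"


orders = OrderService(InventoryService())
print(orders.place_order("widget"))  # order placed for widget
```

Because the boundary between the two "services" is already a narrow interface, extracting one into a separate deployment later is a change of transport (in-process call to RPC), not a rewrite.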
Two mechanisms are supported for doing this: direct annotations in your .proto file, and in YAML as part of your gRPC API configuration file. We recommend using proto annotations for ease of reading and maintenance.