import os
import logging

import gradio as gr
import requests
import pandas as pd
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# Logging setup
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# --- Constants ---
DEFAULT_API_URL = "https://agents-course-unit4-scoring.hf.space"


# --- Internet-Enabled Agent Definition ---
class InternetAgent:
    def __init__(self):
        print("🌐 InternetAgent initializing with web search capabilities...")
        try:
            # Use a Hugging Face model and a web search tool
            self.model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")
            self.search_tool = DuckDuckGoSearchTool()
            # Build an agent with access to web search
            self.agent = CodeAgent(
                tools=[self.search_tool],
                model=self.model,
                max_steps=6,          # Limit steps for speed
                add_base_tools=False  # Use only our own tools
            )
            print("✅ InternetAgent initialized successfully with web search")
        except Exception as e:
            print(f"❌ Error initializing InternetAgent: {e}")
            self.agent = None

        # Fallback knowledge base in case initialization fails
        self.fallback_knowledge = {
            "capital of france": "Paris",
            "capital of germany": "Berlin",
            "capital of uk": "London",
            "capital of usa": "Washington D.C.",
            "2+2": "4",
            "largest planet": "Jupiter",
        }

    def __call__(self, question: str) -> str:
        print(f"🤖 Processing: {question}")

        if not self.agent:
            # Fall back to the built-in knowledge base if the agent failed to initialize
            question_lower = question.lower()
            for key, answer in self.fallback_knowledge.items():
                if key in question_lower:
                    return answer
            return "I need internet access to answer this question properly."

        try:
            # Build a focused prompt for better results
            optimized_prompt = f"""
Please provide a clear, concise, and accurate answer to the following question.
If you need to search for information, use the search tool.
Keep your answer brief and to the point.

Question: {question}

Answer:
"""
            # Run the agent
            response = self.agent.run(optimized_prompt)

            # Clean up the answer (cast to str in case the agent returns a non-string final answer)
            clean_response = self.clean_response(str(response))
            print(f"✅ Answer: {clean_response[:100]}...")
            return clean_response

        except Exception as e:
            print(f"❌ Error in agent execution: {e}")
            return f"I encountered an error while searching for the answer: {str(e)}"

    def clean_response(self, response: str) -> str:
        """Strips meta information from the agent's answer."""
        # Drop the agent's meta commentary
        lines = response.split('\n')
        clean_lines = []

        for line in lines:
            # Skip lines describing tools or intermediate steps
            if any(term in line.lower() for term in ['tool:', 'searching', 'step', 'using tool']):
                continue
            # Skip empty lines at the start
            if not clean_lines and not line.strip():
                continue
            clean_lines.append(line)

        clean_response = '\n'.join(clean_lines).strip()

        # If the answer is too long, keep only the first part
        if len(clean_response) > 500:
            clean_response = clean_response[:497] + "..."

        return clean_response if clean_response else "I couldn't find a clear answer to that question."


# --- Lightweight version for testing ---
class LiteInternetAgent:
    def __init__(self):
        print("🌐 LiteInternetAgent initializing...")
        self.search_tool = DuckDuckGoSearchTool()

    def __call__(self, question: str) -> str:
        try:
            # Query the search tool directly
            result = self.search_tool(question)
            return f"According to web search: {result[:300]}..." if len(result) > 300 else result
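# --- Optional local sanity check (a minimal sketch, not part of the evaluation flow) ---
# Assumes HfApiModel can reach the Hugging Face Inference API (e.g. an HF token is configured);
# kept commented out so it never runs inside the Space.
#
#   _agent = InternetAgent()
#   if _agent.agent is None:
#       _agent = LiteInternetAgent()
#   print(_agent("What is the capital of France?"))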
        except Exception as e:
            return f"Search failed: {str(e)}"


# --- Main function ---
def run_and_submit_all(profile: gr.OAuthProfile | None):
    """
    Fetches all questions, runs the InternetAgent on them, submits all answers,
    and displays the results.
    """
    space_id = os.getenv("SPACE_ID")

    if profile:
        username = f"{profile.username}"
        print(f"User logged in: {username}")
    else:
        print("User not logged in.")
        return "Please Login to Hugging Face with the button.", None

    api_url = DEFAULT_API_URL
    questions_url = f"{api_url}/questions"
    submit_url = f"{api_url}/submit"

    # 1. Instantiate our Internet Agent
    try:
        # Try the full agent first; fall back to the lite version if it fails
        agent = InternetAgent()
        if agent.agent is None:
            agent = LiteInternetAgent()
        print("✅ InternetAgent created successfully")
    except Exception as e:
        print(f"Error instantiating agent: {e}")
        return f"Error initializing agent: {e}", None

    agent_code = f"https://huggingface.co/spaces/{space_id}/tree/main"
    print(f"Agent code URL: {agent_code}")

    # 2. Fetch Questions
    print(f"Fetching questions from: {questions_url}")
    try:
        response = requests.get(questions_url, timeout=30)
        response.raise_for_status()
        questions_data = response.json()
        if not questions_data:
            print("Fetched questions list is empty.")
            return "Fetched questions list is empty or invalid format.", None
        print(f"✅ Fetched {len(questions_data)} questions.")
    except Exception as e:
        error_msg = f"❌ Error fetching questions: {e}"
        print(error_msg)
        # Demo mode with a mix of question types
        demo_questions = [
            {"task_id": "demo1", "question": "What is the capital of France?"},
            {"task_id": "demo2", "question": "What is the current weather in Tokyo?"},
            {"task_id": "demo3", "question": "Who won the Nobel Prize in Physics in 2023?"},
            {"task_id": "demo4", "question": "What is the population of Brazil?"},
            {"task_id": "demo5", "question": "Explain quantum computing in simple terms"},
        ]
        questions_data = demo_questions
        print("🚨 Using demo questions since API is unavailable")

    # 3. Run your Agent
    results_log = []
    answers_payload = []
    print(f"Running agent on {len(questions_data)} questions...")

    for i, item in enumerate(questions_data):
        task_id = item.get("task_id")
        question_text = item.get("question")
        if not task_id or question_text is None:
            print(f"Skipping item with missing task_id or question: {item}")
            continue

        print(f"🔍 Processing question {i+1}/{len(questions_data)}: {question_text[:50]}...")
        try:
            submitted_answer = agent(question_text)
            answers_payload.append({"task_id": task_id, "submitted_answer": submitted_answer})
            results_log.append({"Task ID": task_id, "Question": question_text, "Submitted Answer": submitted_answer})
        except Exception as e:
            print(f"❌ Error running agent on task {task_id}: {e}")
            results_log.append({"Task ID": task_id, "Question": question_text, "Submitted Answer": f"AGENT ERROR: {e}"})

    if not answers_payload:
        print("Agent did not produce any answers to submit.")
        return "Agent did not produce any answers to submit.", pd.DataFrame(results_log)

    # 4. Prepare Submission
    submission_data = {
        "username": username.strip(),
        "agent_code": agent_code,
        "answers": answers_payload
    }
    status_update = f"✅ Agent finished. Processed {len(answers_payload)} answers for user '{username}'"
    print(status_update)
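    # For reference, submission_data serializes to JSON of roughly this shape
    # (field names taken from the payload built above; the exact server-side schema is assumed):
    # {
    #   "username": "<hf-username>",
    #   "agent_code": "https://huggingface.co/spaces/<SPACE_ID>/tree/main",
    #   "answers": [{"task_id": "...", "submitted_answer": "..."}, ...]
    # }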
    # 5. Submit answers (only if the questions are real, not demo ones)
    if "demo" not in str(questions_data[0].get("task_id", "")):
        print(f"Submitting {len(answers_payload)} answers to: {submit_url}")
        try:
            response = requests.post(submit_url, json=submission_data, timeout=120)
            response.raise_for_status()
            result_data = response.json()
            final_status = (
                f"🎉 Submission Successful!\n"
                f"👤 User: {result_data.get('username')}\n"
                f"📊 Overall Score: {result_data.get('score', 'N/A')}% "
                f"({result_data.get('correct_count', '?')}/{result_data.get('total_attempted', '?')} correct)\n"
                f"💬 Message: {result_data.get('message', 'No message received.')}"
            )
            print("✅ Submission successful.")
            results_df = pd.DataFrame(results_log)
            return final_status, results_df
        except Exception as e:
            error_message = f"❌ Submission Failed: {str(e)}"
            print(error_message)
            results_df = pd.DataFrame(results_log)
            return error_message, results_df
    else:
        # Demo mode: show the answers but do not submit them
        demo_status = (
            f"🧪 DEMO MODE (API Unavailable)\n"
            f"👤 User: {username}\n"
            f"📊 Processed: {len(answers_payload)} demo questions\n"
            f"🌐 Agent used web search for answers\n"
            f"💬 Real submission disabled - API not accessible\n\n"
            f"Check the web-powered answers below!"
        )
        print("✅ Demo completed - showing results without submission")
        results_df = pd.DataFrame(results_log)
        return demo_status, results_df


# --- Gradio Interface ---
with gr.Blocks(title="Internet-Enabled AI Agent") as demo:
    gr.Markdown("""
    # 🌐 Internet-Enabled AI Agent

    **Powered by web search and large language models**

    ### 🔧 Capabilities:
    - **Web Search**: Real-time information from DuckDuckGo
    - **LLM Power**: Qwen2.5-Coder-32B model for understanding
    - **Multi-step Reasoning**: Complex question answering

    ### 📚 Example questions:
    - *"Current weather in any city"*
    - *"Latest news headlines"*
    - *"Historical facts and data"*
    - *"Scientific explanations"*
    - *"Complex calculations"*
    """)

    gr.Markdown("""
    ### ⚠️ Important Notes:
    - This agent requires internet access
    - Responses may take longer due to web searches
    - Some questions might not have clear online answers
    """)

    with gr.Row():
        with gr.Column():
            gr.LoginButton()
            run_button = gr.Button("🚀 Run Evaluation", variant="primary")

    with gr.Row():
        with gr.Column():
            status_output = gr.Textbox(
                label="Status",
                lines=4,
                interactive=False
            )
        with gr.Column():
            results_table = gr.DataFrame(
                label="Questions & Answers",
                wrap=True
            )

    # Gradio injects the logged-in user's gr.OAuthProfile automatically because
    # run_and_submit_all declares a parameter of that type, so no explicit inputs are needed.
    run_button.click(
        fn=run_and_submit_all,
        outputs=[status_output, results_table]
    )

if __name__ == "__main__":
    print("🚀 Starting Internet-Enabled AI Agent...")
    demo.launch(debug=True, share=False)
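# Note: when deployed as a Space, requirements.txt is assumed to list at least the packages
# imported above (gradio, requests, pandas, smolagents), plus whatever extra dependency
# DuckDuckGoSearchTool needs for web search in the installed smolagents version.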