| Course No. | Course Title | Prerequisite | Co-requisite | Credit Hours (Lecture : Lab) |
|---|---|---|---|---|
| 606363 | Computing Systems for Data Science and Artificial Intelligence | 606362 | - | 3 (3:0) |
| Instructor Name | Email | Office No. | Office Ext. | Office Hours |
|---|---|---|---|---|
| Abed Alkarim Banna | abanna@uop.edu.jo | - | - | - |

Coordinator's Name: Abed Alkarim Banna
Upon successful completion of this course, students are expected to achieve the following learning outcomes:
| Course ILOs | Program ILOs | Teaching and Learning Method | Assessment Method |
|---|---|---|---|
| Knowledge (K) | |||
| K1 - Understanding of transformer architectures, large language models (LLMs), Mixture-of-Experts (MoE) models, and their applications in NLP and AI systems. | DS7.2 | Interactive Lectures | Final Exam |
| K2 - Understanding of AI Agent architectures, the ReAct framework, multi-agent systems, and Large Action Models. | DS7.2 | Interactive Lectures | Final Exam |
| Intellectual (I) | |||
| I1 - Design AI agent workflows using LangGraph, incorporating reasoning, tool use, and multi-agent collaboration patterns. | DS2.1 | Interactive Lectures | Mid Exam |
| Practical (P) | |||
| P1 - Apply fine-tuned transformer models (BERT, GPT) for NER and text classification tasks, and build RAG-based applications using LangChain. | DS6.1 | Interactive Lectures | Rubric |
| P2 - Build AI agents using LangGraph with ReAct, implement multi-agent systems, and develop LAM-based applications. | DS6.1 | Interactive Lectures | Rubric |
| Transferable (T) | |||
| T1 - Develop logical and systematic thinking in designing, implementing, and deploying AI-powered applications and autonomous agent systems. | DS1.3 | Interactive Lectures | Mid Exam |
| Week | Topic | Topic Details | ILO | Reference | Project Deliverable |
|---|---|---|---|---|---|
| 1 | Transformer Architecture | The Encoder-Decoder Framework. Introduction to transformer architectures and their evolution. The Hugging Face Ecosystem: Models, Datasets, Tokenizers, Pipelines. | K1 | Ref 1 - Ch.1 + Ref 7 | Form teams, create GitHub repo, Transformer Analysis document |
| 2 | Text Classification | Text Classification with Transformers. Sentiment Analysis and multi-class classification. Feature extraction vs fine-tuning. Evaluation metrics. | K1, P1 | Ref 1 - Ch.2 | Classification Agent: categorize sources by type and relevance |
| 3 | Named Entity Recognition (NER) | Token Classification with BERT for NER. BIO/IOB Tagging. Multilingual NER. Fine-tuning BERT for custom NER tasks. | P1, T1 | Ref 1 - Ch.4 | NER Agent: extract entities (people, orgs, tech, dates) from sources |
| 4 | LangChain Fundamentals | Chains, Prompt Templates, Memory, Output Parsers. Document Loaders, Text Splitters, Vector Stores. RAG with LangChain. | K1, P1 | Ref 4 | Agent prompts, document loaders, ChromaDB vector store setup |
| 5 | LangGraph | Agent Orchestration as Graphs. Nodes, Edges, State, Conditional Routing. Stateful cyclical graphs. Checkpointing and Human-in-the-Loop. | I1, P2 | Ref 4 - LangGraph | LangGraph StateGraph skeleton with all agent nodes connected |
| 6 | Large Language Models (LLMs) | Modern LLMs: GPT-4, LLaMA, Gemini, Claude, Mistral, DeepSeek. Open vs Closed. Prompt Engineering. Local models with Ollama. | K1, K2 | Ref 3 + Lectures | LLM config per agent, benchmark 2+ models, token cost tracking |
| 7 | Mixture of Experts (MoE) | Sparse Activation, Router/Gating Network, Expert Networks, Top-K. Switch Transformer, Mixtral, DeepSeek-MoE. | K1, T1 | Ref 5 + Papers | CHECKPOINT 1: Demo NLP pipeline + MoE analysis report |
| 8 | Midterm Exam | Covers Weeks 1-7 | - | - | - |
| 9 | AI Agents & ReAct | AI Agent architecture (LLM + Tools + Memory + Planning). ReAct: Thought, Action, Observation. Tool Use. Agent Memory. | K2, P2 | Ref 2 + Ref 4 + Ref 6 | Research Agent with ReAct loop, Tavily search, web scraping |
| 10 | Multi-Agent Systems | Agentic Design Patterns. Workflow Patterns: Sequential, Parallel, Hierarchical. Frameworks: LangGraph, CrewAI, AutoGen. A2A protocols. | K2, I1, P2 | Ref 3 + Ref 4 + Ref 6 | Full pipeline: Analyzer + Writer + Critic agents, revision loop |
| 11 | Model Context Protocol (MCP) | MCP architecture: Hosts, Clients, Servers. Resources, Tools, Prompts, Sampling. Building MCP servers. | K2, P2 | Lectures + MCP Docs | CHECKPOINT 2: Full multi-agent pipeline + 2 MCP tool servers |
| 12 | Large Action Models (LAMs) | LLM to Agent to LAM. Examples: Rabbit R1, Anthropic Computer Use, OpenAI Operator. Function Calling, Browser Automation. | K2, P2, T1 | Ref 6 + Lectures | Browser Agent (LAM): web navigation, form filling, data extraction |
| 13 | Diffusion Models | Diffusion fundamentals. Forward/reverse process. Denoising architectures. Stable Diffusion, DALL-E. Practical generation. | K1, P1 | Ref 8 + Lectures | Illustration Agent (diffusion-based figures) + Streamlit UI |
| 14 | Project Development | Final development, end-to-end testing, bug fixes, documentation, instructor consultation. | P1, P2 | - | 5 test reports, README, technical docs, demo video, slides |
| 15 | Competition Day | Live head-to-head competition: 3 unseen topics, 30 min/topic. Judging panel scores. 10-min presentations. Peer review. | P1, P2, T1 | - | COMPETITION: Live run + presentation + peer review |
| 15-16 | Final Exam | - | K1, K2, I1, T1 | - | - |
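Several of the weekly topics above lend themselves to small worked examples. For Week 3's BIO/IOB tagging, the sketch below (plain Python, no libraries; the function and tag names are illustrative, not from any course reference) decodes a BIO tag sequence, of the kind a fine-tuned BERT tagger emits, into entity spans:

```python
def bio_to_spans(tokens, tags):
    """Decode BIO tags into (entity_type, text) spans.

    tokens: list of word strings
    tags:   parallel list of tags like "B-PER", "I-PER", "O"
    """
    spans, current = [], None          # current = (entity type, [words])
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):       # B- always opens a new entity
            if current:
                spans.append((current[0], " ".join(current[1])))
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            current[1].append(token)   # I- continues an open entity of the same type
        else:                          # "O", or an I- tag with no matching open span
            if current:
                spans.append((current[0], " ".join(current[1])))
            current = None
    if current:                        # close an entity that runs to the end
        spans.append((current[0], " ".join(current[1])))
    return spans

print(bio_to_spans(
    ["Sam", "Altman", "leads", "OpenAI"],
    ["B-PER", "I-PER", "O", "B-ORG"],
))  # [('PER', 'Sam Altman'), ('ORG', 'OpenAI')]
```

A real NER Agent would obtain `tags` from a token-classification model rather than a hard-coded list.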
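For Week 4's RAG material, the core retrieval step can be sketched without LangChain or ChromaDB: a toy bag-of-words "embedding" and cosine similarity stand in for a real embedding model and vector store (all names and documents here are made up for illustration):

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query (the 'R' in RAG)."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Transformers use self-attention over token sequences.",
    "ChromaDB stores vectors for similarity search.",
    "The midterm covers weeks one to seven.",
]
print(retrieve("how does self-attention work in transformers", docs, k=1))
```

In a full RAG chain, the retrieved documents would then be injected into the LLM prompt before generation.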
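For Week 7, the top-k router/gating idea behind MoE layers reduces to a few lines. This is a scalar toy, not a real MoE layer; the expert functions and router logits are invented for the example:

```python
import math

def top_k_gate(logits, k=2):
    """Softmax over only the top-k router logits; all other experts get
    weight 0 -- the 'sparse activation' idea behind MoE layers."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = {i: math.exp(logits[i]) for i in top}
    z = sum(exps.values())
    return {i: e / z for i, e in exps.items()}

def moe_forward(x, experts, logits, k=2):
    """Route x through the selected experts and mix outputs by gate weight."""
    return sum(w * experts[i](x) for i, w in top_k_gate(logits, k).items())

# Three toy 'experts'; the router picks the two with the highest logits.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]
print(moe_forward(3.0, experts, logits=[0.1, 2.0, 2.0]))  # 0.5*6 + 0.5*9 = 7.5
```

Switch Transformer, Mixtral, and DeepSeek-MoE apply this routing per token inside transformer feed-forward layers.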
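For Week 9, the ReAct loop (Thought -> Action -> Observation) can be shown with a scripted stand-in for the LLM; the step format and tool names below are assumptions for illustration, not the course's required interface:

```python
def react_agent(question, llm, tools, max_steps=5):
    """Minimal ReAct loop: alternate Thought -> Action -> Observation until
    the model emits a 'finish' action carrying the final answer.

    llm:   maps the transcript so far to the next step, a dict
           {"thought": str, "action": str, "input": str}
    tools: maps an action name to a callable taking the action input
    """
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(transcript)
        transcript += f"\nThought: {step['thought']}"
        if step["action"] == "finish":            # the model decides it is done
            return step["input"]
        observation = tools[step["action"]](step["input"])
        transcript += (f"\nAction: {step['action']}[{step['input']}]"
                       f"\nObservation: {observation}")
    return None  # gave up after max_steps

# Scripted stand-in for a real LLM, so the loop can run offline.
script = iter([
    {"thought": "I should search for this.", "action": "search",
     "input": "capital of Jordan"},
    {"thought": "The observation answers it.", "action": "finish",
     "input": "Amman"},
])
answer = react_agent(
    "What is the capital of Jordan?",
    llm=lambda _transcript: next(script),
    tools={"search": lambda q: "Amman is the capital of Jordan."},
)
print(answer)  # Amman
```

The Research Agent deliverable replaces the scripted `llm` with a real model call and the `search` lambda with a tool such as Tavily.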
The course project follows a Competition-Based Learning approach. Teams of 2-3 students build a Multi-Agent Research Assistant that integrates all course topics, then compete head-to-head in a live evaluation event in Week 15. Three checkpoints throughout the semester ensure steady progress.
Project Checkpoints
| When | Checkpoint | What to Demo | Type |
|---|---|---|---|
| Week 7 | Checkpoint 1: NLP Pipeline | Working Classification Agent + NER Agent + Research Agent with search. Transformer analysis document + MoE comparison report. | Instructor Feedback |
| Week 11 | Checkpoint 2: Full Pipeline + MCP | Complete multi-agent pipeline (Research -> Classify -> NER -> Analyze -> Write -> Critic) with MCP tool servers and revision loop. | Instructor + Peer Feedback |
| Week 15 | Competition Day | 3 unseen research topics revealed live. All teams run systems simultaneously (30 min/topic). Reports anonymized and scored by judging panel. 10-minute live presentation per team. | Graded (Judging Panel) |
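As a rough picture of what Checkpoint 2 asks for, the sketch below wires a sequential agent pipeline to a critic-driven revision loop in plain Python. The stage names and toy agents are illustrative only; a real implementation would use LangGraph per the schedule:

```python
def run_pipeline(topic, stages, critic, max_revisions=2):
    """Sequential multi-agent pipeline with a critic-driven revision loop.

    stages: ordered (name, fn) pairs; each fn takes and returns the state dict.
    critic: maps the state to (approved, feedback).
    """
    state = {"topic": topic}
    for _name, fn in stages:           # Research -> ... -> Write, in order
        state = fn(state)
    writer = dict(stages)["write"]     # only the Writer reruns on feedback
    for _ in range(max_revisions):
        approved, feedback = critic(state)
        if approved:
            break
        state["feedback"] = feedback
        state = writer(state)
    return state

# Toy run: the Critic rejects the first draft, the Writer revises once.
stages = [
    ("research", lambda s: {**s, "sources": ["source-1", "source-2"]}),
    ("write",    lambda s: {**s, "draft": "revised" if "feedback" in s else "first"}),
]
result = run_pipeline(
    "AI agents",
    stages,
    critic=lambda s: (s["draft"] == "revised", "add more detail"),
)
print(result["draft"])  # revised
```

A bounded `max_revisions` matters in the graded setting: it keeps the revision loop from eating into the 30-minute-per-topic competition limit.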
Competition Scoring Rubric (per report, 100 points)
| Criteria | Points | Description |
|---|---|---|
| Report Quality | 20 | Coherence, depth, clarity, structure (intro, body, conclusion) |
| Source Coverage | 15 | Number and diversity of relevant sources (min 10), proper citations |
| Entity Extraction (NER) | 10 | Accuracy of extracted entities (people, organizations, technologies) |
| Source Classification | 10 | Correct categorization by type, domain, relevance |
| Illustrations | 10 | Relevance and quality of AI-generated figures (diffusion models) |
| Speed | 10 | Time to complete pipeline (faster = higher score, within 30-min limit) |
| Cost Efficiency | 5 | Total API token cost per report (lower = higher score) |
| Error Handling | 5 | Graceful handling of API errors, bad sources, timeouts |
| Innovation | 15 | Creative features: MoE routing, advanced MCP tools, LAM actions |
Competition Awards (Bonus Points)
| Award | Bonus | Criteria |
|---|---|---|
| 1st Place - Gold | +5 pts | Highest overall competition score |
| 2nd Place - Silver | +3 pts | Second highest score |
| 3rd Place - Bronze | +2 pts | Third highest score |
| Best Innovation | +2 pts | Most creative technical features (stackable) |
| Best Report Quality | +2 pts | Highest report quality score (stackable) |
| Fastest System | +1 pt | Fastest average pipeline completion time |
| Assessment Method | Grade | Comments |
|---|---|---|
| Project - Competition-Based Learning (30% total) | ||
| Weekly Deliverables | 10% | Weekly project milestones submitted on GitHub (see Project Deliverable column in schedule) |
| Competition Score | 8% | Average score across 3 unseen topics from judging panel (Week 15) |
| Architecture & Topic Coverage | 4% | All 12 course topics integrated: transformers, NER, MoE, MCP, LAMs, diffusion, etc. |
| Code Quality & Documentation | 3% | GitHub repo, README, technical report, demo video |
| Live Presentation | 3% | 10-minute demo + Q&A on Competition Day |
| Peer Review | 2% | Quality of structured feedback on 2 other teams' reports |
| Mid Exam | 30% | Covers Weeks 1-7 (Transformers, Text Classification, NER, LangChain, LangGraph, LLMs, MoE) |
| Final Exam | 40% | Comprehensive exam covering all course material |
| Total | 100% | |
Attendance Policy: University regulations apply to attendance.
Academic Honesty: Academic dishonesty is an unacceptable mode of conduct, and will not be tolerated in any form at University of Petra. All persons involved in academic dishonesty and plagiarism in any form will be disciplined in accordance with University rules and regulations.
Make-up Exams: Only students with valid excuses are allowed to take make-up exams. All excuses must be signed by the Faculty Dean. Students are responsible for arranging an exam date with their instructor before the next regular exam takes place.
All assignments must be submitted by the specified due date. Late work is accepted no more than 3 days after the deadline and will incur a mark deduction.
No make-up quizzes will be given under any circumstances.
| | Name | Date | Signature |
|---|---|---|---|
| Head of Department | | | |
| Faculty Dean | | | |