NALA Project

NALA Main Logo

Define, Design, and Deliver



From Idea to Chatbot in 3 easy steps


Background

In the past year, developing personalized AI chatbots has become increasingly popular, not only in the private and public sectors but also in educational institutions.

At NTU, several faculties have successfully developed and deployed their own AI chatbots in their courses. These AI chatbots have been personalized with system prompts crafted by the faculties and grounded in the course materials. The overall feedback from students on the use of these AI chatbots in courses, based on studies conducted by faculties and a pilot study by ATLAS, has been positive.

The success of these studies, along with the need to advance classroom learning and acclimatize students and faculties to the growing presence of AI, has led to more faculties requesting support in developing and implementing their own AI chatbots in their courses. Consequently, there is a growing need for the mass development and deployment of AI chatbots university-wide.

To address this challenge, NALA (NTU AI Learning Assistants) Builder simplifies the development and deployment of AI chatbots through a streamlined three-step process. This tool eliminates the need for users to source databases and storage solutions, search and index systems, or front-end interfaces, or to integrate and piece together the various components required for the overall AI chatbot architecture.

Instead, users can rely on the NALA Builder for a simplified process to deploy their own customized chatbots or learning assistants:

Step 1: Define

  • Enter the name and provide a brief description of the chatbot.
  • Upload the chatbot's logo.
  • Upload the materials for the LLM's knowledge base.
  • Define learning outcomes and topics.

Step 2: Design

  • Configure the chatbot by selecting the desired LLM, setting model parameters (e.g., temperature), and crafting or selecting a default system prompt.

Step 3: Deliver

  • Review and deploy the chatbot. Once deployed, the chatbot is ready to be used and the link can be shared with the students.
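The three steps above can be sketched as a single configuration object with a small completeness check before deployment. This is a minimal sketch: the field names and the `validate_chatbot_config` helper are illustrative, not the actual NALA Builder schema.

```python
# Hypothetical shape of a NALA Builder chatbot configuration.
# Field names are illustrative assumptions, not the real schema.

def validate_chatbot_config(config: dict) -> list[str]:
    """Return a list of missing required fields; empty means ready to deliver."""
    required = {
        "define": ["name", "description", "logo", "materials", "learning_outcomes"],
        "design": ["llm", "temperature", "system_prompt"],
    }
    missing = []
    for step, fields in required.items():
        section = config.get(step, {})
        for field in fields:
            if field not in section:
                missing.append(f"{step}.{field}")
    return missing

config = {
    "define": {
        "name": "Course Assistant",
        "description": "AI learning assistant for a data science module",
        "logo": "logo.png",
        "materials": ["week1.pdf", "week2.pdf"],
        "learning_outcomes": ["Explain vector similarity search"],
    },
    "design": {
        "llm": "claude-3",
        "temperature": 0.2,
        "system_prompt": "You are a helpful course assistant.",
    },
}

print(validate_chatbot_config(config))  # → [] (all required fields present)
```

An empty list means all Define and Design fields are filled in and the chatbot can move to the Deliver step.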


System Architecture

Chatbot

The chatbot’s system architecture starts with building a knowledge base from PDF files containing module content, topics, and administrative information. Admins upload these files through our builder platform, and they are stored in AWS S3. Using FAISS (Facebook AI Similarity Search), the files are indexed to create a searchable vector space. When admins initiate indexing, the files in S3 are processed, and the resulting vectors (knowledge base) are stored in S3.
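The indexing step can be sketched as follows. This is a toy, self-contained version: a hash-based embedding and an in-memory list stand in for the real embedding model, the FAISS index, and the S3-backed storage, and all names are illustrative.

```python
# Toy sketch of the indexing pipeline: chunk text -> vector -> store.
# A real deployment would use an embedding model and a FAISS index
# persisted to S3; the hash-based embedding here is a stand-in.
import hashlib
import math

def embed(text: str, dim: int = 8) -> list[float]:
    """Deterministic toy embedding: hash character trigrams into a unit vector."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def build_index(chunks: list[str]) -> list[tuple[list[float], str]]:
    """Index each chunk as (vector, text) — the 'searchable vector space'."""
    return [(embed(chunk), chunk) for chunk in chunks]

index = build_index([
    "Week 1: introduction to vector similarity search",
    "Assessment: two quizzes and one final project",
])
print(len(index))  # → 2
```

In production the vectors would come from a proper embedding model and be stored in a FAISS index rather than a Python list, but the shape of the data is the same.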

When a student submits a query, the system converts the question into a vector representation. By comparing this query vector with the indexed vectors using vector similarity scoring, the system retrieves the most relevant content from the knowledge base. This query, along with any existing chat history and the retrieved content, is sent to the large language model (LLM), such as Claude 3 or OpenAI GPT. The LLM processes this input to generate a coherent and contextually appropriate response, which is then displayed to the student.
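The retrieval-and-prompt step can be sketched like this. The cosine scoring mirrors the similarity search that FAISS performs at scale, but the vectors here are hand-made toys and the prompt format is illustrative, not the production one.

```python
# Toy sketch of retrieval: score the query vector against indexed
# vectors, take the best matches, and assemble the LLM prompt.
# Vectors and prompt format are illustrative stand-ins.
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / ((na * nb) or 1.0)

# Pretend knowledge base: (vector, chunk text) pairs from indexing.
knowledge_base = [
    ([0.9, 0.1, 0.0], "Tutorials are held every Tuesday at 2pm."),
    ([0.1, 0.8, 0.2], "The final project is worth 40% of the grade."),
]

def retrieve(query_vec: list[float], top_k: int = 1) -> list[str]:
    scored = sorted(knowledge_base,
                    key=lambda kv: cosine(query_vec, kv[0]), reverse=True)
    return [text for _, text in scored[:top_k]]

def build_prompt(question: str, history: list[str], context: list[str]) -> str:
    joined_context = "\n".join(context)
    joined_history = "\n".join(history)
    return (f"Context:\n{joined_context}\n\n"
            f"History:\n{joined_history}\n\n"
            f"Question: {question}")

context = retrieve([0.85, 0.15, 0.0])
print(context)  # → ['Tutorials are held every Tuesday at 2pm.']
```

The assembled prompt (context, history, question) is what gets sent to the LLM for the final response.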


Builder Platform

The Builder Platform allows admins to quickly deploy and manage chatbots. Admins can create new chatbots or access existing ones, with four main features available for each chatbot:

Control Chatbot: Turn the chatbot on or off.

Edit Chatbot: Update course details such as course code, title, aim, learning outcomes, and topics. Admins can also upload materials, index them, name the chatbot, upload a chatbot picture, configure starting questions, and control LLM parameters like max tokens, temperature, top p, and prompt engineering.

User Management: Upload lists of faculty and student emails to grant access to the builder platform (for faculty) and the chatbot (for students).

Faculty Dashboard: Access detailed logs of student interactions, conversation histories, and engagement metrics. Reporting is divided into four main topics: student, content, conversation, and feedback.
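As one illustration of how the Edit Chatbot parameters (max tokens, temperature, top p, system prompt) could map onto a model call, the sketch below builds a Bedrock-style request body for an Anthropic model. The helper name and default values are assumptions, not the production code.

```python
# Hypothetical mapping from Edit Chatbot settings to a Bedrock
# request body for an Anthropic model. The body format follows the
# Bedrock Anthropic Messages API; defaults are illustrative.
import json

def build_invoke_body(system_prompt: str, question: str,
                      max_tokens: int = 1024,
                      temperature: float = 0.2,
                      top_p: float = 0.9) -> str:
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "top_p": top_p,
        "system": system_prompt,
        "messages": [{"role": "user", "content": question}],
    }
    return json.dumps(body)

payload = build_invoke_body("You are a course assistant.", "When is the quiz?")
print(json.loads(payload)["temperature"])  # → 0.2
```

Each admin-configurable knob from the Edit Chatbot screen becomes one field in the request body, so prompt engineering and parameter tuning require no code changes.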

The system features a user-friendly interface that allows admins to design and deploy chatbots with ease, offering customization options tailored to module requirements and student needs. Built on AWS, each component of the system is designed for security and adaptability across various educational contexts. The front end is packaged as container images in ECR and runs on ECS with Fargate, ensuring scalability, with traffic distributed via CloudFront and an Application Load Balancer. User authentication is managed by Cognito.

The backend infrastructure uses Lambda functions with Bedrock for processing logic, interfacing with API Gateway for seamless API interactions. Credentials are secured through Secrets Manager, while conversation history is stored in RDS databases, ensuring reliable data management. Documents are stored in S3 buckets, providing durable storage.
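A minimal sketch of what such a Lambda handler might look like, assuming API Gateway delivers the student's question as a JSON body. The Bedrock client is injectable so the sketch runs without AWS; the event fields and helper names are illustrative, not the actual backend code.

```python
# Hypothetical Lambda handler: API Gateway event in, Bedrock call,
# JSON response out. The client is injected so this can be exercised
# locally with a stub; field names are illustrative assumptions.
import io
import json

def handler(event, context=None, bedrock=None):
    question = json.loads(event["body"])["question"]
    if bedrock is None:
        import boto3  # real deployments use the Bedrock runtime client
        bedrock = boto3.client("bedrock-runtime")
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": question}],
        }),
    )
    answer = json.loads(response["body"].read())["content"][0]["text"]
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}

# Local check with a stub standing in for the Bedrock runtime:
class _StubBedrock:
    def invoke_model(self, modelId, body):
        reply = {"content": [{"text": "Friday"}]}
        return {"body": io.BytesIO(json.dumps(reply).encode())}

out = handler({"body": json.dumps({"question": "When is the quiz?"})},
              bedrock=_StubBedrock())
print(out["statusCode"])  # → 200
```

Injecting the client keeps the handler testable outside AWS, which is also why the stub pattern above works without any cloud resources.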

Additionally, knowledge bases are powered by Aurora databases for high performance, while another S3 bucket stores embeddings-related information crucial to the chatbots' AI functionality. Each chatbot deployment follows Infrastructure as Code practices, built from predefined templates to ensure consistency across deployments.

NALA Chatbot System Architecture
Figure: Our cloud-based chatbot system architecture


Videos