AI Projects #8 for Social Good Report
The eighth batch of AI Projects focused on the theme of Social Good! Starting on February 24, 22 participants developed 7 projects over 3 months. Those three months went by in the blink of an eye, filled with creativity, teamwork, and a shared passion for making a positive impact! On May 18, we hosted a showcase at Beykoz Kundura, where 45 AI enthusiasts joined us to see the incredible projects, spanning topics such as action recognition, computer vision, NLP, and LLMs.
On February 24, participants proposed 17 projects and then teamed up around 7 of them. Teams and projects were finalized, and the roadmap was determined. Two weeks later, we met at Beykoz Kundura, where teams shared their first outputs, listened to each other’s presentations, and provided feedback. After that, we met three more times to discuss each other’s projects and brainstorm together, contributing ideas to better understand potential problems and suggest solutions. On May 11, we held an online Internal Presentation Day to prepare for the Showcase the following week.
On May 18, we held a Showcase with 45 AI enthusiasts, including our esteemed professors, professionals from the industry, and students. This showcase was special for us, as it was the first face-to-face showcase we organized at Beykoz Kundura since the pandemic!
7 PROJECTS
Sign Language Translation for Turkish Sign Language
Sign language recognition from videos is crucial for effective communication with the deaf community. In this study, we adapt language modeling strategies to the sign language domain by quantizing hand and body landmarks spatially and temporally, utilizing graph convolutional networks.
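The write-up mentions quantizing continuous landmark coordinates into discrete tokens so that language modeling techniques can be applied. Below is a minimal sketch of the general idea using a simple uniform-grid scheme; the project itself uses graph convolutional networks, and the grid scheme here is only an illustrative stand-in, not the team's method.

```python
# Sketch: turning continuous (x, y) landmark coordinates into discrete
# tokens via a uniform grid, so sequence models can treat a signing
# video as a "sentence" of landmark tokens. Toy stand-in for the
# project's GCN-based quantization.

def quantize_landmark(x, y, bins=8):
    """Map normalized coordinates in [0, 1) to a single grid-cell id."""
    col = min(int(x * bins), bins - 1)
    row = min(int(y * bins), bins - 1)
    return row * bins + col

def frames_to_tokens(frames, bins=8):
    """Each frame is a list of (x, y) landmarks; emit one token per landmark."""
    return [[quantize_landmark(x, y, bins) for x, y in frame] for frame in frames]

frames = [
    [(0.10, 0.20), (0.55, 0.60)],  # frame 1: two landmarks
    [(0.12, 0.22), (0.58, 0.61)],  # frame 2: slight movement
]
tokens = frames_to_tokens(frames)
print(tokens)  # → [[8, 36], [8, 36]]: small movements map to the same tokens
```

Once frames become token sequences, standard sequence-modeling machinery (vocabularies, embeddings, next-token objectives) applies directly.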
Check out the repository of this project to learn more!
Predicting protein-protein interactions from Foldseek sequence using NLP methods
Proper protein-protein interactions are vital to a healthy body. Any abnormality in protein interactions may lead to diseases. In this study, we addressed the challenge of predicting these interactions using structure-enhanced sequences of proteins (which contain 3D information in 1D format) to solve the protein-protein interaction problem. To do so, we utilized a variety of deep learning architectures ranging from 1D-CNNs to GCNs.
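The description mentions scanning structure-enhanced sequences (which pack 3D information into a 1D string) with architectures such as 1D-CNNs. Here is a minimal sketch of that encoding-plus-convolution step; the alphabet, sequence, and filter are toy placeholders, not the project's actual data or weights.

```python
import numpy as np

# Sketch of the 1D-CNN idea: a protein sequence is one-hot encoded and
# scanned with a convolution filter, producing one feature per window.
# The 20-letter amino-acid alphabet and random filter are illustrative
# placeholders only.

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # standard 20 amino-acid letters
IDX = {ch: i for i, ch in enumerate(ALPHABET)}

def one_hot(seq):
    m = np.zeros((len(seq), len(ALPHABET)))
    for i, ch in enumerate(seq):
        m[i, IDX[ch]] = 1.0
    return m

def conv1d(x, kernel):
    """Valid 1D convolution along the sequence axis; kernel shape (k, channels)."""
    k = kernel.shape[0]
    return np.array([(x[i:i + k] * kernel).sum() for i in range(len(x) - k + 1)])

x = one_hot("MKTAYIAK")                                   # shape (8, 20)
kernel = np.random.default_rng(0).normal(size=(3, len(ALPHABET)))
feat = conv1d(x, kernel)                                  # one activation per 3-mer window
print(feat.shape)                                         # → (6,)
```

In practice many such filters are learned and stacked, and the resulting features from two proteins are combined to predict whether they interact.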
MediMate: Medical Chatbot
This project aims to develop an advanced, interactive AI chatbot designed to accurately comprehend and respond to inquiries from patients and healthcare staff. By integrating and fine-tuning state-of-the-art open-source language models such as Llama and Mistral, we have customized our chatbot to support the Turkish language, ensuring it delivers precise and useful answers. The chatbot is intended to enhance accessibility and provide valuable assistance to a diverse array of users.
Machine Unlearning in LLMs
Machine unlearning is a methodology within artificial intelligence that emphasizes selective forgetting rather than the wholesale retraining of models. This approach is particularly significant in light of stringent privacy regulations such as the General Data Protection Regulation (GDPR) and the Right to Be Forgotten (RTBF). Instead of retraining from scratch on a scrubbed dataset, AI systems can implement targeted processes to modify or discard specific data points while retaining the overall knowledge structure. By adopting this strategy, organizations can comply with privacy regulations without compromising the functionality or efficiency of their AI models. This not only conserves computational resources but also bolsters trust in AI systems by demonstrating a commitment to safeguarding individuals' privacy rights.
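To see why deleting data points without full retraining is hard for deep models, it helps to look at a case where it is easy. For instance-based models such as nearest-neighbor classifiers, exact unlearning is possible by simply deleting the targeted training examples. The sketch below is a conceptual illustration of that contrast, not code from the project, which works on LLMs where only approximate unlearning techniques apply.

```python
# Conceptual sketch (not from the project): a 1-nearest-neighbor model
# stores its training data directly, so a targeted data point can be
# removed exactly, with no retraining. Deep models entangle data into
# weights, which is what makes unlearning in LLMs a research problem.

class OneNN:
    """1-nearest-neighbor classifier over 1D points; supports exact deletion."""
    def __init__(self):
        self.data = []                    # list of (x, label)

    def fit(self, points):
        self.data.extend(points)

    def forget(self, x):
        """Remove every stored example with feature x: the 'right to be forgotten'."""
        self.data = [(px, y) for px, y in self.data if px != x]

    def predict(self, x):
        return min(self.data, key=lambda p: abs(p[0] - x))[1]

model = OneNN()
model.fit([(0.0, "a"), (1.0, "b"), (0.4, "private")])
print(model.predict(0.45))    # → "private" (the sensitive point dominates)
model.forget(0.4)             # targeted removal, no full retraining
print(model.predict(0.45))    # → "a" (falls back to the remaining neighbors)
```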
DeTraffic: Deep Reinforcement Learning to De-Traffic Our Lives
The average person reportedly spends 43 hours in traffic annually. This time can and should be reduced, benefiting both individuals and society in the long run. DeTraffic comes to the rescue: a multi-agent deep reinforcement learning model that controls traffic lights.
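To make the reinforcement learning framing concrete, here is a toy sketch of a single traffic-light agent: it observes the queue lengths on two approaches and learns, via tabular Q-learning, to give green to the busier one. The state, reward, and environment dynamics below are assumptions for illustration; DeTraffic itself uses deep networks and multiple coordinating agents.

```python
import random

# Toy sketch (assumed, not the project's code): one intersection agent
# learns with tabular Q-learning. State = (north-south queue, east-west
# queue); action 0 gives green to north-south, action 1 to east-west.

random.seed(0)
ACTIONS = [0, 1]
Q = {}                          # state -> [value of action 0, value of action 1]
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(state, action):
    """Reward = cars served minus cars left waiting; new queues arrive randomly."""
    ns, ew = state
    served = ns if action == 0 else ew
    reward = served - (ns + ew - served)
    next_state = (random.randint(0, 5), random.randint(0, 5))
    return next_state, reward

state = (3, 1)
for _ in range(2000):
    q = Q.setdefault(state, [0.0, 0.0])
    action = random.choice(ACTIONS) if random.random() < eps else q.index(max(q))
    next_state, reward = step(state, action)
    nq = Q.setdefault(next_state, [0.0, 0.0])
    q[action] += alpha * (reward + gamma * max(nq) - q[action])
    state = next_state

# The learned policy prefers green for the busier approach:
print(Q[(5, 0)].index(max(Q[(5, 0)])))
```

The deep version replaces the Q-table with a neural network and lets neighboring intersections learn jointly, which is where the "multi-agent" part comes in.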
Entity State Tracking with Mamba SSM
In this project, we apply the Mamba model to several different entity state tracking datasets and benchmark its performance, comparing it with transformer-based architectures such as T5. Entity tracking is a high-level linguistic behavior, and achieving high accuracy in this task requires many additional capabilities. The importance of entity state tracking lies in understanding context, maintaining conversation flow, personalizing responses, and facilitating complex conversational tasks that involve entities, such as booking flights and scheduling appointments. By its nature, the problem requires long-context reasoning. Mamba, a recent model built on a state space model architecture rather than the Transformer, shows promise in tasks that require long-context reasoning and offers several other advantages, such as linear scaling.
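For readers unfamiliar with the task, here is a minimal sketch of what an entity state tracking example looks like and how exact-match accuracy can be computed. The "boxes" format below is an assumed illustration in the style of common entity-tracking benchmarks; the project's actual datasets and models (Mamba, T5) differ.

```python
# Sketch: an entity state tracking instance is a sequence of operations
# on entities; the model must report the final state. Gold states are
# computed by replaying the operations; predictions are scored by
# exact match per container.

def track_entities(operations):
    """Replay (item, box) moves and return the final contents of each box."""
    boxes = {}
    for item, box in operations:
        for contents in boxes.values():
            contents.discard(item)          # the item leaves its old box
        boxes.setdefault(box, set()).add(item)
    return boxes

def exact_match(pred, gold):
    """Fraction of boxes whose predicted contents exactly match the gold state."""
    keys = set(pred) | set(gold)
    return sum(pred.get(k, set()) == gold.get(k, set()) for k in keys) / len(keys)

ops = [("apple", "box1"), ("key", "box2"), ("apple", "box2")]
gold = track_entities(ops)                      # {"box1": set(), "box2": {"apple", "key"}}
pred = {"box1": set(), "box2": {"apple", "key"}}
print(exact_match(pred, gold))                  # → 1.0
```

The long-context difficulty comes from scaling this up: with hundreds of interleaved operations, the model must carry every entity's state across the whole sequence.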
Turkish Language Modeling with Tiny Models
Due to growing interest in AI, larger language models are commonly developed, yet smaller models can achieve similar results with greater efficiency. This study evaluates different architectures and their combination with tokenization strategies to optimize language modeling, specifically aiming to develop efficient, compact models for Turkish.
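One concrete reason tokenization matters for compact Turkish models is "fertility": the average number of tokens a tokenizer produces per word. Turkish is agglutinative, so this choice strongly affects sequence length and thus model cost. The sketch below compares two toy baselines (whitespace vs. character tokenization); the metric is standard, but the tokenizers are illustrative stand-ins, not the ones evaluated in the project.

```python
# Sketch: comparing tokenizer fertility (tokens per word) on a Turkish
# sentence. Subword tokenizers aim to land between the two extremes
# shown here.

def char_tokenize(text):
    return [ch for ch in text if not ch.isspace()]

def word_tokenize(text):
    return text.split()

def fertility(tokenizer, text):
    """Average number of tokens emitted per whitespace-separated word."""
    return len(tokenizer(text)) / len(text.split())

sentence = "evlerimizden geliyoruz"            # "we are coming from our houses"
print(fertility(word_tokenize, sentence))      # → 1.0 token per word
print(fertility(char_tokenize, sentence))      # → 10.5 tokens per word
```

Lower fertility means shorter sequences per sentence, which is exactly where a well-chosen subword vocabulary lets a small model punch above its weight.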
We are thrilled to have presented all 7 projects we embarked on during this 3-month journey.
Congratulations to all the teams for their hard work and dedication, and a huge thanks to our mentors and the AI Team for their support!
Thank you for your interest, see you in the next batch!
All participants must abide by our CODE OF CONDUCT and LETTER OF CONSENT.
A BEV Foundation project, inzva is a non-profit hacker community that organizes study groups, project groups, and camps in the fields of AI and algorithms, gathering CS students, academics, and professionals in Turkey.
Follow us on our social media accounts to get the latest news about our upcoming events and programs!