AI Projects #2

We have concluded the second batch of our free, research-based AI Projects programme with a grand showcase held on May 11, 2019, where the teams presented the outcomes of three months of work.


We are happy to announce that every team had the opportunity to present their work, whether completed or still in progress. Moreover, some teams decided to take their research further by submitting papers to workshops relevant to their projects. One example is the Fake Paper Generation team, who are preparing a submission to the 3rd Workshop on Neural Generation and Translation (WNGT 2019).

As the teams came together one last time to showcase their projects, members of our community and supporting academics joined them to watch the presentations of these past three months and to give feedback that will help the teams improve their work further.

Batch #2 Projects

FAKE PAPER GENERATION


Along with his team members Samed Demir and Özgür Özdemir, AI Projects #2 lead Uras Mutlu opened the showcase by presenting the results of their work on generating fake academic papers, and gave a real-time demo of how computer-generated research papers can be produced, just for fun and, of course, not for academic use.

Generating a structured document such as an academic paper is a challenging problem. No datasets were available for training a generative model on academic papers, so the team built a small dataset from arXiv papers on computer vision. Experiments with basic RNNs and the Transformer-XL model showed that it is possible to train a model that captures the dependencies within individual sections; however, generating a whole paper, from abstract to conclusion, remains a challenge.

“We were inspired by the idea of using neural networks to create structured documents, first explored by Andrej Karpathy. The results of Karpathy's baseline, in spite of being a simple RNN model, were 'magically' good and drove our ambition to use newer models in this domain. In order to learn longer dependencies, we experimented with Transformer variants to generate LaTeX-formatted academic papers.”
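
For readers who want a feel for how such a generator is trained, here is a minimal character-level language-model sketch in PyTorch, in the spirit of the char-RNN baseline the team mentions. The training file, hyperparameters and prompt are assumptions made for illustration; the team's own experiments used RNNs and Transformer-XL on their arXiv-based dataset.

```python
# Minimal character-level language model sketch (PyTorch).
# Hypothetical example, not the team's code: "papers.tex" and all
# hyperparameters are placeholders.
import torch
import torch.nn as nn

text = open("papers.tex", encoding="utf-8").read()   # placeholder: concatenated LaTeX sources
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

class CharLM(nn.Module):
    def __init__(self, vocab, emb=128, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()
seq_len, batch = 256, 32

for step in range(1000):
    # sample random windows of seq_len characters and predict the next character
    idx = torch.randint(0, len(data) - seq_len - 1, (batch,))
    x = torch.stack([data[int(i):int(i) + seq_len] for i in idx])
    y = torch.stack([data[int(i) + 1:int(i) + seq_len + 1] for i in idx])
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# sample a "paper" character by character from a LaTeX-style prompt
prompt = "\\section{Introduction}\n"
out = [stoi[c] for c in prompt if c in stoi] or [0]
x, state = torch.tensor([out]), None
for _ in range(2000):
    logits, state = model(x, state)
    probs = torch.softmax(logits[0, -1], dim=-1)
    nxt = torch.multinomial(probs, 1).item()
    out.append(nxt)
    x = torch.tensor([[nxt]])
print("".join(chars[i] for i in out))
```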

GITHUB REPO


GAMEPLAY USING REINFORCEMENT LEARNING


The second team, consisting of Hamdi Erkut, Ali Akay and Can Bulguoğlu, took the stage to talk about another popular topic: teaching an AI to play games using reinforcement learning algorithms, which learn how to behave through a system of positive and negative rewards.

The team started with toy problems that have discrete action spaces, such as the Taxi, Cart Pole and Pong games in OpenAI's Gym environment. After successfully training agents to play these games, they moved on to games with continuous action spaces: Mountain Car and Bipedal Walker. They tried different approaches; for Bipedal Walker, the best results were achieved with genetic algorithms.

“Our project is an introduction to reinforcement learning using the Gym environment. We try to implement different algorithms for both discrete and continuous action spaces.”
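
For readers new to the topic, here is a minimal sketch of tabular Q-learning on Gym's Taxi environment, the kind of discrete toy problem the team started with. It is an illustrative example rather than the team's code, and the environment id and hyperparameters are assumptions.

```python
# Tabular Q-learning sketch on a discrete-action Gym environment.
# Uses the classic Gym API where step() returns (obs, reward, done, info);
# the environment id may differ across Gym versions.
import numpy as np
import gym

env = gym.make("Taxi-v3")
q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, eps = 0.1, 0.99, 0.1

for episode in range(5000):
    state, done = env.reset(), False
    while not done:
        # epsilon-greedy action selection
        if np.random.rand() < eps:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q[state]))
        next_state, reward, done, _ = env.step(action)
        # one-step temporal-difference (Q-learning) update
        q[state, action] += alpha * (reward + gamma * np.max(q[next_state]) - q[state, action])
        state = next_state

# greedy rollout with the learned table
state, done, total = env.reset(), False, 0
while not done:
    state, reward, done, _ = env.step(int(np.argmax(q[state])))
    total += reward
print("episode return:", total)
```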

GITHUB REPO


TRAFFIC OPTIMIZATION USING MULTI-AGENT REINFORCEMENT LEARNING

Murat Akif Dumlu took the microphone on behalf of his teammates to talk about their project, which proved to be harder than expected, much like the traffic jams in our own city, Istanbul.

The objective of the team was to apply multi-agent reinforcement learning techniques to the traffic optimization problem. The main challenge was that no multi-agent traffic environment was available in OpenAI's Gym or in any other open-source project, so a lot of work was required just to reach a starting point. The team tried their best, and instead of experimental results, Murat Akif Dumlu gave an informative presentation on the subject, explaining the open challenges and the solutions proposed in the literature.

“With this project, we aim to find a near-optimal solution to routing optimization where there is no central authority to distribute traffic, hence each agent is supposed to find its own route.”
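
To make the routing formulation in the quote a bit more concrete, here is a toy sketch of independent learners choosing between two congested routes. The environment below is entirely hypothetical (two routes, linear congestion, stateless Q-learners) and is only meant to illustrate the kind of problem the team studied, not their implementation.

```python
# Toy sketch of independent Q-learning for route choice: N agents repeatedly
# pick one of two routes, and each route's travel time grows with the number
# of agents on it. Hypothetical illustration only.
import numpy as np

n_agents, n_routes, episodes = 10, 2, 2000
alpha, eps = 0.1, 0.1
q = np.zeros((n_agents, n_routes))        # one value table per agent (stateless, bandit-style)
free_flow = np.array([10.0, 15.0])        # base travel time of each route

rng = np.random.default_rng(0)
for _ in range(episodes):
    # each agent independently picks a route (epsilon-greedy)
    explore = rng.random(n_agents) < eps
    choices = np.where(explore, rng.integers(0, n_routes, n_agents), q.argmax(axis=1))
    load = np.bincount(choices, minlength=n_routes)
    travel_time = free_flow + 2.0 * load   # congestion: more agents means a slower route
    rewards = -travel_time[choices]        # each agent tries to minimise its own travel time
    q[np.arange(n_agents), choices] += alpha * (rewards - q[np.arange(n_agents), choices])

print("final route loads:", np.bincount(q.argmax(axis=1), minlength=n_routes))
```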

GITHUB REPO


EARTHQUAKE PREDICTION


With the help of Giray Gökırmak, a more experienced member of our community, Boğaziçi University Computer Science and Engineering sophomores Cemre Efe Karakaş and Eylül Yalçınkaya decided to tackle a Kaggle challenge focused on predicting earthquakes before they happen. This project was one of the highlights of the day for many reasons, an important one being that the subject hits close to home: Istanbul suffered a major earthquake back in 1999.

The problem was to predict the time remaining until an earthquake happens. Cemre and Eylül first identified the features of the data most relevant to the remaining time, then used these features to train a predictor. They also tried more recent approaches, such as neural networks, to solve the problem.

“Forecasting earthquakes is one of the most important problems in earth science, considering their devastating consequences. In this project, which is also a Kaggle competition, we address the issue of when an earthquake will take place. If the challenge is solved, it will have the potential to improve earthquake hazard assessments. Seismic data comes from a laboratory-run earthquake experiment and our model aims to predict the time remaining before an earthquake.”
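
As a rough illustration of the feature-based approach described above, the sketch below summarises fixed windows of the raw acoustic signal with simple statistics and fits a regressor on the remaining time to failure. The column names follow the Kaggle competition's training file; the window size, feature set and the choice of a random forest are assumptions for illustration, not the team's pipeline.

```python
# Hedged sketch: statistical features over signal windows + a regressor on
# "time remaining until the next (lab) earthquake".
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# column names follow the competition's train.csv: acoustic_data, time_to_failure
train = pd.read_csv("train.csv")
window = 150_000                          # one feature row per window of raw signal

rows, targets = [], []
for start in range(0, len(train) - window, window):
    seg = train["acoustic_data"].values[start:start + window]
    rows.append({
        "mean": seg.mean(), "std": seg.std(),
        "max": seg.max(), "min": seg.min(),
        "q95": np.quantile(seg, 0.95), "q05": np.quantile(seg, 0.05),
    })
    # target: time to failure at the end of the window
    targets.append(train["time_to_failure"].values[start + window - 1])

X, y = pd.DataFrame(rows), np.array(targets)
model = RandomForestRegressor(n_estimators=200, n_jobs=-1)
model.fit(X, y)
print("in-sample MAE:", np.mean(np.abs(model.predict(X) - y)))
```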

GITHUB REPO


PREDICT FUTURE SALES


Can Bulguoğlu had a second project to showcase: a Kaggle challenge on predicting future sales.

This project involved working with time-series data. The objective was to predict future sales from previous transaction data. Can first performed an exploratory data analysis and identified the features most relevant to predicting sales. He then created low-dimensional embeddings of these features and used them as input to two learning approaches: LightGBM and a neural network. Minimizing the root mean squared error (RMSE), he obtained the best results with LightGBM.

“In this project, we aim to improve our understanding of time series. Our implementation tries to benefit from neural network embeddings.”
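
For a concrete feel of the LightGBM route, here is a simplified sketch that aggregates the daily transactions into monthly counts, adds a single previous-month lag feature and validates with RMSE. File and column names follow the Kaggle "Predict Future Sales" data; the feature set and parameters are illustrative assumptions, and Can's actual pipeline (including the learned embeddings) is richer than this.

```python
# Simplified gradient-boosting baseline for monthly sales prediction.
import pandas as pd
import lightgbm as lgb
from sklearn.metrics import mean_squared_error

# columns follow the competition data: date_block_num, shop_id, item_id, item_cnt_day, ...
sales = pd.read_csv("sales_train.csv")
monthly = (sales.groupby(["date_block_num", "shop_id", "item_id"])["item_cnt_day"]
                .sum().rename("item_cnt_month").reset_index())

# previous-month sales as a simple lag feature
lag = monthly.rename(columns={"item_cnt_month": "item_cnt_prev_month"}).copy()
lag["date_block_num"] += 1
monthly = monthly.merge(lag, on=["date_block_num", "shop_id", "item_id"], how="left").fillna(0)

features = ["shop_id", "item_id", "item_cnt_prev_month"]
train = monthly[monthly.date_block_num < 33]     # hold out the last month for validation
valid = monthly[monthly.date_block_num == 33]

model = lgb.LGBMRegressor(objective="regression", n_estimators=500, learning_rate=0.05)
model.fit(train[features], train["item_cnt_month"])
pred = model.predict(valid[features])
print("validation RMSE:", mean_squared_error(valid["item_cnt_month"], pred) ** 0.5)
```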

GITHUB REPO


IMAGE RESTORATION


Led by Ahmet Melek, who will also guide our Applied AI Study Group in July, the image restoration team consisted of three other members: Onur Boyar, Burak Satar and Furkan Gürsoy. The team tackled an everyday problem: brightening dark photos so that the captured moment is not lost.

Ahmet Melek and his team first reproduced the results of their reference paper, “Learning to See in the Dark”. They then set out to extend the original work by adding the ability to brighten any photograph taken with any smartphone. They experimented with different models and hyperparameter combinations and achieved results competitive with the reference paper.

“In this project, we take a picture photographed in a dark room and generate a successful bright version of it. Previous techniques for this purpose can produce low-quality, noisy results. We use generative models to overcome this issue.

We reproduce and extend the results of the "Learning to See in the Dark" project by Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. We run various experiments with different hyperparameter and loss combinations. We also work on additional challenges on the side, such as inspecting the runtime of the models and working with pictures taken by different image sensors.”
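
To give a flavour of the training setup, here is a compact, hypothetical sketch of an encoder-decoder network trained with an L1 loss on pairs of dark inputs and well-exposed targets. The original paper uses a full U-Net on raw sensor data; the tiny model and the random stand-in tensors below are assumptions made purely for illustration.

```python
# Compact "dark image in, bright image out" training sketch (PyTorch).
import torch
import torch.nn as nn

class TinyEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyEnhancer()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# stand-in batch: dark inputs and their well-exposed ground-truth versions
dark = torch.rand(4, 3, 128, 128) * 0.1
bright = torch.rand(4, 3, 128, 128)

for step in range(100):
    pred = model(dark)
    loss = loss_fn(pred, bright)
    opt.zero_grad(); loss.backward(); opt.step()
print("final L1 loss:", loss.item())
```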

GITHUB REPO


Over the course of this batch, our participants improved their teamwork by coming together with like-minded AI enthusiasts for a self-directed learning journey, and sharpened their problem-solving skills by working through the obstacles that come up when implementing new methods. By letting participants experiment with different research interests, the programme fulfilled its main purpose: bringing together talented individuals with diverse educational and professional backgrounds and a wide range of expertise to work on a topic of their choice.

Thanks to Microsoft for providing our teams with the GPUs they needed throughout the programme.

Our AI Projects programme is now accepting applications for the third batch.

The application deadline is August 08, 2019.

Click here to apply.


Subscribe to our newsletter here and stay tuned for more hacker-driven activities.

inzva is supported by BEV Foundation, an education foundation for the digital native generation which aims to build communities that foster peer-learning and encourage mastery through one-to-one mentorship.