Intel’s Research Scientist Advocates and Hunts for Safety in AI

Along with her team, Ilke Demir aspires to find those responsibility pillars.

Ilke Demir, a senior staff research scientist at Intel Studios, has a rather offbeat wish: she wants the hype surrounding large language models to be as low as the hype around their ethics.

In an exclusive interview with AIM, she said, “Every day there’s new information that the training set includes copyrighted images, or something that is claimed to be written by a large language model is actually not, or vice versa. Such things happen because the ethical aspects are left out of the process.” Commenting on the state of affairs, she added, “I’m trying to find those responsibility pillars.”

This call for responsible technology is at odds with recent events in the industry. Two weeks ago, software giant Microsoft fired its entire ethics and society team. And this is nothing new: in 2020, Google fired Timnit Gebru, co-lead of its ethical AI team. The internet giant has made several efforts to stabilise the department, but chaos still reigns supreme. A few months after Gebru’s exit, her co-lead Margaret ‘Meg’ Mitchell was also shown the door.


In September 2022, Meta disbanded its Responsible Innovation (RI) team; however, the company has been taking baby steps towards creating responsible services.

Intent > Technology


One of the questions Demir and her team are trying to answer: ‘Can we eliminate the impersonation aspect from deepfakes so that it forces the model to create someone that doesn’t exist at all, so they are not impersonating someone?’

Deepfakes have been problematic historically, and no concrete solution has been found. “Intent overpowers technology,” Demir said. A 2019 report showed that more than 95% of deepfakes are used for adult content. If the misuse is already that extreme, she added, then anything good someone tries to do in that space is only a very small part of it.

She suggested that, as a first step, the community should solve the problem of detecting deepfakes so people can make better decisions about what to believe. Last year, Intel introduced FakeCatcher, a real-time deepfake detector that analyses ‘blood flow’ in video pixels and returns results in milliseconds with 96% accuracy.
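Intel has described the principle behind the detector: live skin shows a faint, periodic colour change driven by blood flow that most generated faces lack. Below is a minimal sketch of that photoplethysmography (PPG) idea, not Intel’s actual pipeline; it assumes the caller already has a time-ordered sequence of RGB face crops from any off-the-shelf face detector.

```python
import numpy as np

def heartbeat_score(face_frames, fps=30.0):
    """Toy photoplethysmography (PPG) check: real skin shows a subtle,
    periodic colour change driven by blood flow; many synthetic faces
    do not. Returns the fraction of signal energy in the human
    heart-rate band (0.7 to 4 Hz, roughly 42 to 240 bpm); a higher
    value suggests a genuine pulse is present.

    face_frames: sequence of HxWx3 RGB crops of the same face over time.
    """
    # Mean green-channel intensity per frame (green carries most PPG signal).
    signal = np.array([frame[..., 1].mean() for frame in face_frames])
    signal = signal - signal.mean()  # remove the DC offset

    spectrum = np.abs(np.fft.rfft(signal)) ** 2    # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    band = (freqs >= 0.7) & (freqs <= 4.0)         # heart-rate band
    total = spectrum[1:].sum()                     # ignore the DC bin
    return float(spectrum[band].sum() / total) if total > 0 else 0.0
```

The production system reportedly combines many such spatial and temporal PPG signals with a trained classifier; the ratio above is only meant to show why blood flow is a usable forensic signal.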

Citing the example of the Zelensky video telling Ukrainian troops to surrender, Demir said deepfakes target vulnerable populations: people in a war zone would watch such a video without scrutinising its resolution, body posture and other telltale signs.

She suggested that an indicator showing what percentage of a video is fake should become an industry standard, with detection deployed especially in time-sensitive cases such as elections, and on social media platforms for videos that go viral. “So that people can make their own decision, share it or leave it at their own risk. I think the more we push detection, the more the intent behind deepfakes will neutralise,” she said, sounding hopeful.
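As a toy illustration of what such an indicator could look like, assume some detector (such as the sketch above) scores consecutive segments of a video; the platform-facing label is then simply the share of flagged segments. The threshold and segment scheme here are arbitrary choices for the example.

```python
def fake_percentage(segment_scores, threshold=0.5):
    """Aggregate per-segment fake probabilities into a single
    'X% of this video was flagged' label for display alongside a video.

    segment_scores: floats in [0, 1], one per segment, e.g. from a
    deepfake detector run on consecutive one-second chunks.
    """
    scores = list(segment_scores)
    if not scores:
        return "No segments analysed"
    flagged = sum(score >= threshold for score in scores)
    pct = 100.0 * flagged / len(scores)
    return f"{pct:.0f}% of this video was flagged as likely synthetic"

# Example: a clip whose middle section is detected as manipulated.
print(fake_percentage([0.1, 0.2, 0.9, 0.95, 0.85, 0.3]))  # 50% flagged
```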

In the longer term, Demir wants all created content to carry information about who created it, how it was created, which tool was used, what the intent was, and whether it was made with consent. “We want to embed the data itself so that when people consume it, they will look at the data and identify if it is a trusted source,” she said.
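This echoes content-provenance efforts such as C2PA. A minimal sketch of the idea follows, with every field name chosen purely for illustration rather than taken from any published schema: a manifest answering those questions is bound to the exact bytes of the media by a hash, so tampering with either breaks the link. A real system would also cryptographically sign the manifest.

```python
import hashlib
import json

def make_provenance_manifest(media_bytes, creator, tool, intent, consent):
    """Toy provenance record binding authorship metadata to the exact
    bytes of a piece of media via a SHA-256 digest."""
    return {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,    # who created it
        "tool": tool,          # which tool was used
        "intent": intent,      # declared purpose
        "consent": consent,    # was it created with consent?
    }

def provenance_matches(media_bytes, manifest):
    """A consumer re-hashes the media and checks it against the manifest."""
    return hashlib.sha256(media_bytes).hexdigest() == manifest["content_sha256"]

media = b"\x00\x01..."  # stands in for the raw bytes of a video or image
manifest = make_provenance_manifest(
    media, creator="studio@example.com", tool="face-generator-v2",
    intent="parody", consent=True)
print(json.dumps(manifest, indent=2))
print("trusted provenance:", provenance_matches(media, manifest))
```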

Protecting privacy via AI

We have many social media pictures we don’t want to be in, yet people keep uploading them without our consent. Then there are automatic face-recognition algorithms that associate your face with your name, and crawlers taking your face everywhere. “Even if you untag yourself on a platform, your name is not visible there, but your name is associated with your face,” explained Demir. “Our faces are like digital passports; we need to have control over them,” she added.

To stop this, Demir and her colleagues have developed a system called ‘My Face My Choice’. Elaborating on the method, she said, “If you don’t want to appear in a photo, your face is swapped with a quantifiably similar deepfake, and you don’t appear in the photo anymore. The photo looks very normal and natural, but you’re not there anymore.”
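A heavily simplified sketch of that control flow is below; `detect_faces`, `embed_face` and `generate_and_blend` are hypothetical stand-ins for a real face detector, identity-embedding network and deepfake generator, and the matching threshold is an arbitrary placeholder.

```python
import numpy as np

# Hypothetical stand-ins; any real detector, embedding model and
# deepfake generator could be plugged in here.
def detect_faces(photo):
    raise NotImplementedError("return a list of (bounding_box, face_crop)")

def embed_face(face_crop):
    raise NotImplementedError("return a 1-D identity embedding vector")

def generate_and_blend(photo, box, face_crop):
    raise NotImplementedError("swap in a generated face of a non-existent person")

def anonymize_photo(photo, consenting_embeddings, threshold=0.6):
    """'My Face My Choice'-style flow: any face that does not match an
    enrolled, consenting user is replaced with a deepfake of someone
    who does not exist, so the photo stays natural-looking while the
    non-consenting person is no longer in it."""
    for box, crop in detect_faces(photo):
        embedding = embed_face(crop)
        # Distance to the nearest consenting user's enrolled embedding.
        dists = [np.linalg.norm(embedding - c) for c in consenting_embeddings]
        if not dists or min(dists) > threshold:
            photo = generate_and_blend(photo, box, crop)  # no consent: swap
    return photo
```

The notable design choice is that replacement, unlike blurring or pixelation, keeps the photo looking natural while removing the identity.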

Lewis Griffin, a professor at University College London, opined that the tool is significant as it could have a much bigger positive impact on online privacy, though several technical hurdles around security and storage remain before it can be deployed on large networks. “Also, it is unclear whether there would be enough demand from social media users who want their face obscured to strangers,” he added.


Tasmia Ansari
Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.
