In-depth | Tech

Artificial intelligence is coming for us

By - 07.02.2023

Spoiler alert: it’s exciting (and somewhat terrifying).

Imagine that in the year 2100 you wear glasses that project a virtual assistant, an Artificial Intelligence (AI) that understands you perfectly and anticipates your every need. As you wake up in the morning, your AI assistant reminds you of your schedule and helps you plan your day. You go to school, where you are presented with virtual lectures and personalized assignments tailored to your learning style and abilities. The AI assistant is your tutor, helping you understand complex concepts and providing you with immediate feedback.

You finish your day at school and head to work, where you are greeted by an office that adapts to your preferences and an AI-powered assistant that helps you with your tasks. Your assistant can analyze large amounts of data, communicate with other team members and even generate reports and presentations for you.

After work, you head back home, where your AI-powered home welcomes you with the perfect temperature and lighting and prepares your dinner based on your dietary preferences. You spend your evening catching up with loved ones and enjoying immersive entertainment tailored to your taste.

Welcome to the world of AI, where the future is not set, but crafted.

This is a world where AI has been fully integrated into our daily lives, making every aspect of them better and more efficient. But for now, this utopia is just a figment of the imagination.

Achieving the level of AI integration and personalization described in this utopian scenario would require significant technological advances.

First, it would require major advances in natural language processing (NLP) and machine learning, so that AI assistants could understand and respond to human speech and anticipate needs. It would also require advances in computer vision and sensor technology, so that the virtual assistant could function smoothly and provide feedback seamlessly.

For personalization, it would require a large amount of data, as well as the ability to analyze and process it. The data would need to be collected and stored in a secure manner, and AI models would need to be trained on this data to create personalized experiences.

One way AI could train on personal data is through user-provided data, for example via an app or portal where users input information about themselves and their preferences. AI could also use data collected from sensors and devices integrated with the AI system, such as smart home devices and wearables, to learn more about a person’s behavior and habits.
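
To make that idea concrete, here is a toy sketch, with invented data and no real device API, of how a system might “learn” one simple habit, a typical wake-up time, from timestamped readings of a hypothetical wearable:

```python
from datetime import datetime
from statistics import mean

# Hypothetical wearable log: the time motion was first detected each
# morning (invented data for illustration, not a real device API).
wake_events = [
    "2023-01-02 06:58", "2023-01-03 07:10", "2023-01-04 06:45",
    "2023-01-05 07:05", "2023-01-06 06:50",
]

def minutes_since_midnight(stamp: str) -> int:
    t = datetime.strptime(stamp, "%Y-%m-%d %H:%M")
    return t.hour * 60 + t.minute

# "Learning" here is just averaging past behavior; real assistants
# would use far richer models, but the principle is the same.
avg = mean(minutes_since_midnight(s) for s in wake_events)
print(f"Typical wake-up time: {int(avg) // 60:02d}:{int(avg) % 60:02d}")
```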

Are we close to this utopian scenario? As of today, the development of AI technology is still in its early stages, and it is not clear how quickly the technology will advance or what new breakthroughs may be made.

However, it’s important to note that advancements in AI research and implementation are happening at a fast pace. Rapid improvements in areas like machine learning, computer vision and natural language processing have already led to AI systems that leave people in awe.

Entering into AI

In recent years, the world has been captivated by the arrival of advanced AI systems such as ChatGPT, an AI-powered language model. Many people have been amazed by the capabilities of these systems, which can write essays and code, conduct research and even solve complex math problems.

However, many AI-assisted products already in use today are so integrated into our everyday lives that people may not think of them as “AI.” Some examples include the recommendation algorithms used by many websites and apps to suggest products, movies, music and other items based on users’ browsing and search history. These systems use machine learning to predict which items a user may be interested in.
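
The core idea behind such recommenders can fit in a few lines. This is a minimal sketch with invented users and titles, not any platform’s actual algorithm: it scores unseen titles by how similar other viewers’ tastes are to yours.

```python
from math import sqrt

# Toy viewing history: which users watched which titles
# (invented data for illustration).
history = {
    "ana":   {"Dune", "Arrival", "Interstellar"},
    "blend": {"Dune", "Arrival", "The Martian"},
    "cana":  {"Notting Hill", "Arrival"},
}

def similarity(a: set, b: set) -> float:
    """Cosine similarity between two users' watch sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / sqrt(len(a) * len(b))

def recommend(user: str) -> list[str]:
    # Score every title the user hasn't seen by summing the
    # similarity of each user who has seen it.
    seen = history[user]
    scores = {}
    for other, titles in history.items():
        if other == user:
            continue
        w = similarity(seen, titles)
        for title in titles - seen:
            scores[title] = scores.get(title, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ana"))  # ['The Martian', 'Notting Hill']
```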

Then there are spam filters: email providers use AI-powered filters to automatically identify and block unwanted messages. Perhaps the most familiar examples are the social media algorithms that platforms like Facebook, Instagram and Twitter use to decide what appears on users’ feeds, ranking and filtering posts based on relevance, timeliness and popularity.
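
Spam filtering is often taught as a first example of machine learning. Here is a minimal sketch using scikit-learn’s naive Bayes classifier, with invented emails; the filters providers actually run are proprietary and far more sophisticated, but the idea of learning from labeled examples is the same:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny labeled training set (invented for illustration).
emails = [
    "win a free prize now", "claim your free money",
    "meeting moved to 3pm", "lunch tomorrow?",
]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)  # word-count features

model = MultinomialNB().fit(features, labels)

# Classify a new, unseen message.
incoming = vectorizer.transform(["free prize inside"])
print(model.predict(incoming))  # ['spam']
```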

However, let’s go back to the basics. What’s AI and why all the fuss about it?

Artificial Intelligence is a field of computer science and engineering that is dedicated to creating machines and software that can perform tasks that typically require human intelligence. These tasks include understanding natural language, recognizing objects and images, making decisions and even learning from experience.

The origins of AI can be traced back to the 1950s, when researchers first began experimenting with ways to make computers more intelligent. However, it wasn’t until the late 20th century that significant advancements were made in the field, thanks to the development of more powerful computers and the availability of large amounts of data.

One of the first AI products ever introduced was the IBM 704, which was a computer system that was capable of playing chess. Developed in the mid-1950s, the IBM 704 was able to analyze chess positions and make decisions based on its understanding of the game. This was a significant breakthrough, as it demonstrated that computers were capable of mimicking human intelligence in specific domains. 

However, the IBM 704 was not able to beat human chess grandmasters, and it took several decades for a computer to achieve that milestone. On February 10, 1996, IBM’s Deep Blue defeated world chess champion Garry Kasparov in the opening game of a six-game match, becoming the first computer to beat a reigning world champion in a game under standard chess tournament time controls. Kasparov went on to win that match; Deep Blue only defeated him over a full match in their 1997 rematch.

Since then, AI technology has advanced significantly and has been applied to a wide range of industries and applications.

The latest fascination is ChatGPT, an AI-powered language model developed by OpenAI. It is a type of machine learning model that can understand and generate human language. ChatGPT can be used for a variety of natural language processing tasks, such as language translation, text summarization, question answering and even writing essays, stories and code. It is trained on a massive dataset of text, which allows it to generate text that reads like human writing.
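
At the time of writing, ChatGPT itself is only available through its web interface, but OpenAI’s related GPT-3 models can be queried programmatically. A minimal sketch, assuming the openai Python package and a valid API key (the key and prompt below are placeholders):

```python
# A sketch of querying an OpenAI GPT-3 model programmatically;
# ChatGPT itself has no public API at the time of writing.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 model available via the API
    prompt="Explain what a language model is in one sentence.",
    max_tokens=60,
)
print(response.choices[0].text.strip())
```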

Since its release for mass testing in November 2022, ChatGPT has been used by a large number of people, including researchers, developers and data scientists. It amassed 1 million users in just five days after launching.

With the increasing use of AI in various industries, some concerns have been raised. 

One of the biggest concerns about AI is that it could replace human workers, potentially leading to widespread job loss and economic disruption.

Another concern is bias and fairness. AI systems are only as good as the data they are trained on, which means they may replicate or even amplify the biases present in that data. Some people are also concerned that AI systems could become so advanced that they surpass human intelligence and autonomy, leading to potential ethical and societal issues.

One serious concern is that AI is being used in more and more security applications, such as surveillance and cybersecurity, which could lead to violations of privacy and civil liberties. Others are concerned about the lack of transparency in the decision-making processes of some AI systems.

The concerns about AI are certainly grounded, in the sense that they are based on real potential risks and challenges associated with the development and use of AI. However, it’s important to note that the actual extent of these risks and challenges is still uncertain and the field of AI is rapidly evolving.

For example, the concern about bias and fairness in AI systems is certainly real, as there have been instances where AI systems have replicated or amplified bias present in their training data. At the same time, researchers and practitioners are working on solutions to mitigate these issues, such as fairer algorithms and fair representation learning.
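
One common first step in that work is simply measuring disparities in a system’s outputs. A toy illustration, with invented decisions and groups, of auditing approval rates across two groups, one simple notion of fairness known as demographic parity:

```python
# A toy fairness audit: compare approval rates across groups
# ("demographic parity"). All data are invented for illustration.
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

gap = abs(approval_rate("A") - approval_rate("B"))
print(f"Approval-rate gap: {gap:.2f}")  # a large gap flags potential bias
```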

AI in the media

The development of AI has raised concerns in the information and media landscape due to its ability to generate synthetic media, such as deepfakes. Deepfakes are videos or images that are artificially generated using AI, and they can be used to create realistic-looking forgeries of real people or events. This could be used to create fake news, spread propaganda or impersonate someone to steal personal data. This technology can be used to spread misinformation and disinformation and consequently lead to the erosion of trust in news and information sources.

Another way AI can generate information disorder is through the manipulation of online content. Social media platforms are being used as a tool to spread false information, and AI can be used to artificially amplify the reach and impact of this content, for example through algorithms that target specific audiences with content designed to influence their opinions or actions. Additionally, AI can be used to generate fake comments, likes and followers on social media, creating fake engagement that makes certain information or accounts look more credible than they actually are.

Newsrooms themselves have begun to integrate AI technology to help them write news stories, as it can be used to automate the writing of certain types of news stories, such as sports scores, stock market updates and weather reports. AI systems can be trained to analyze large amounts of data, such as news articles, social media posts, and financial data, and then automatically generate news stories based on that analysis.

The process of writing news with AI can vary depending on the specific system and implementation, but in general, it involves training the AI model on a large dataset of relevant news articles and providing the AI system with a specific topic or event to write about. The AI system then uses natural language processing techniques to analyze the data and generate text that resembles a human-written news article. 
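
For the simplest categories, like sports scores or weather, the “writing” can be little more than filling a template with structured data; learned language models come in for freer-form text. A toy sketch of the template approach, with an invented match and venue:

```python
# A toy sketch of template-based news automation, the simplest kind of
# data-to-text system used for scores and weather. The match data,
# teams and venue are invented for illustration.
match = {
    "home": "Prishtina", "away": "Drita",
    "home_goals": 2, "away_goals": 1,
    "venue": "Fadil Vokrri Stadium", "day": "Saturday",
}

def write_report(m: dict) -> str:
    if m["home_goals"] > m["away_goals"]:
        verdict = f"{m['home']} beat {m['away']}"
    elif m["home_goals"] < m["away_goals"]:
        verdict = f"{m['away']} beat {m['home']}"
    else:
        verdict = f"{m['home']} and {m['away']} drew"
    return (f"{verdict} {m['home_goals']}-{m['away_goals']} "
            f"at {m['venue']} on {m['day']}.")

print(write_report(match))
# Prishtina beat Drita 2-1 at Fadil Vokrri Stadium on Saturday.
```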

There are several ethical implications to consider when using AI to write news stories. One issue is the potential for bias: AI systems are only as good as the data they are trained on, and if the data contains biases, the AI system will replicate those biases in its output. Using AI to write news stories may also lead to a loss of human expertise and of the ability to convey a story’s context, as well as the risk of spreading misinformation or disinformation by mistaking fake information for real.

Another important aspect is accountability. As AI-generated news may be indistinguishable from human-written articles, it could be difficult to hold anyone accountable for inaccuracies or biases in the content.

However, it is helpful for journalists to fact-check AI-written news before publishing it. Fact-checking can help ensure that the information in the news story is accurate and reliable, and can also help identify any potential biases or inaccuracies in the AI-generated content.

Additionally, AI-generated news should be clearly labeled as such and be transparent about how it was generated to help readers make informed decisions about the credibility and the reliability of the information.

The word from the ‘author’

Apart from this section, the rest of the article — excluding titles and subtitles — has been written by ChatGPT based on my questions, and I am having trouble deciding who should get the byline, me or the computer.

I checked the article on Crossplag, an online platform that claims to detect AI-written content, and the results said that “this text was mainly written by AI.”

Usually, the research for an article like this takes me up to a week, but with ChatGPT I spent only five hours posing questions and another evening arranging its responses into a proper structure. I spent the second day fact-checking all of ChatGPT’s information and claims. And guess what ChatGPT falsely claimed? Well, the IBM 704 was not exactly a computer system capable of playing chess. Rather, it was the first computer for which a complete chess program was developed.

Some of ChatGPT’s limitations: it has no information beyond 2021, and when I asked it to read a few of my recent explainers to adapt to my writing style, it could not read websites or documents. And yes, it stopped me every hour to let me know that I had asked too many questions in that time frame, and it sometimes stopped working due to high demand.

How did it help my working process? Well, for a start, it spared my browser from having 50 tabs open with dozens of documents that I would otherwise read and highlight in order to write an explainer article. I couldn’t write this in Albanian originally, as ChatGPT’s primary language is English and the information it provides in other languages is not at the same level of accuracy.

What do I feel I missed from my usual process of researching and writing? It stole the joy of researching and writing, processes that media nerds enjoy. Getting quick answers to my questions deprived me of a wider perspective on the topic I was trying to explain. The research process involves reading a wide body of work on a topic, sometimes even starting from the basics. While it is impossible to read everything on a topic, you still get exposed to a diverse range of perspectives and then decide on your own editorial approach.

I have been curious about AI for a long time and had done some reading before ChatGPT was launched, which helped inform my questions. Otherwise, if I hadn’t asked ChatGPT about the downsides of AI, for example, or about people’s main concerns regarding it, I am not sure it would have mentioned any. ChatGPT gets you the answers you ask for, but in journalism what is crucial is getting the questions right.

When reading about how newsrooms across the world were using AI, I wondered how this would play out in Kosovo’s media landscape. What if an outlet decided to save time by asking AI to write daily news and used its journalists to write more in-depth stories instead? The problem is that if the media in Kosovo trained an AI product on the existing news environment, the quality would be the same — poor. This is because, by and large, Kosovo’s media landscape is dominated by tabloid-style, sensationalist journalism.

But what if credible outlets took this step, rather than the ones that pollute the information landscape by skipping fact-checking, context and, yes, sometimes even the truth? What if disinformation agents trained AI for their own purposes and took disinformation one step further? An AI that writes “fake news” in seconds sounds terrifying, doesn’t it?

This highlights another concern in the field of AI. Most products for public consumption are subject to some oversight and standards for safety, efficacy and security. For AI, there are hardly any regulations: anyone can launch an AI product and make it available to everyone.

This could exacerbate already existing digital gaps. Many people lack the digital literacy skills to identify disinformation, and AI could supercharge malevolent actors’ ability to flood the media ecosystem with false or damaging information. It would reinforce inequalities and leave people behind in accessing information, and even jobs.

Other than these big open discussions — which I will definitely come back to in other articles — I can say that I enjoyed this experiment. Yes, ChatGPT is repetitive and its explanations are generic, if not boring at times. However, it can certainly save you time that you can then use for other tasks that can’t (yet) be AI-assisted.

In the end, as long as its output is properly fact-checked prior to publication, AI can be a friend to journalists and newsrooms, and it will certainly not replace them — because what makes a journalist goes beyond merely gathering information.

Feature image: Dina Hajrullahu / K2.0.

Editor’s note: An earlier version incorrectly used “ChatGPT3” instead of “ChatGPT.” The mistake has been corrected.