<p>Artificial Intelligence arrived with the computer age and has been around for about seventy years. It has traditionally focused on classifying and analysing data patterns to describe, diagnose, and predict outcomes and solutions. Though it occasionally competed with humans and came off better, it never posed a real threat to humankind. Chatbots, too, date back to the 1960s; generative AI has since grown capable of producing text, images, audio, and video files, plus synthetic data.</p><p>Around 2014, a machine learning technique called generative adversarial networks (GANs) was formulated that could fabricate convincingly realistic audio and video recordings of real people. The first high-profile victim of this activity, later known as deepfakes, was none other than Republican Presidential candidate Donald Trump. A few words in many of his video speeches were manipulated by GANs to make him sound like a buffoon and a political imbecile.</p><p>The fake versions had over 90 per cent original content, with bits of manipulated text that were ridiculously incorrect but difficult to detect as fake. By 2020, OpenAI had tested the skills of Generative AI in multiple formats, and by 2022, ChatGPT, backed by $10 billion in funding from Microsoft, had started stoking fears of Generative AI taking over the world. Apart from deepfakes, the capability of generative AI to replicate original text, voices, graphics, and images has enabled large-scale academic plagiarism that will be extremely difficult to detect. Replication and plagiarism were never so easy. Generative AI, powered by machine learning (ML) and amplified by the publicity around ChatGPT, has come into widespread use.</p><p>However, the machine learning models that power generative AI are also being used to deploy malware across businesses and enterprise networks. Threat actors are using ML-driven TTPs (tactics, techniques, and procedures) to circumvent standard security protocols. Generative AI has made distributed denial of service (DDoS) attacks easier, as it helps generate millions of independent service requests within seconds, overloading servers instantly. The malware could use variable AI models to mask its intention until it fulfils its purpose, thus nullifying standard cyber defence protocols. This has increased the quantity of attacks while reducing the chances of detection.</p><p>The problem has become grave because almost every solution, from online gaming to assembly-line automation, toy robots to surgical robot assistants, IoT to process controls, edutech solutions to chatbot helplines, medical procedures to missile guidance systems, uses machine learning to function. Most of these ML systems are built on open source software, which can be easily corrupted by malware powered by variable Generative AI models that can be masked and embedded in the software.
Similarly, anyone who knows basic coding can write a brief prompt, generate a phishing email template with the help of a generative AI tool such as ChatGPT, and send the malicious link to millions of users in minutes.</p><p>Globally, the total number of DDoS attacks after the pandemic grew by over 200 per cent in 2021 and 150 per cent in 2022, and these attacks are becoming increasingly sophisticated and complex, evading most antivirus and firewall defences. In April 2023, ‘Anonymous Sudan’, a pro-Islamic hacker group, conducted simultaneous DDoS attacks on the Delhi, Mumbai, Hyderabad, Goa, and Kochi airports. Other groups active against India include Chinese threat actors such as ‘Red Echo’ and ‘Mustang Panda’. According to Netscout, DDoS attacks against India have doubled since 2023, targeting banks, airports, the power sector, and industry. However, attacks originating from India have also increased by over 80 per cent, making India the third-largest source of DDoS attacks after the US and China, as per Cloudflare.</p><p>The computational power used by AI is doubling every six to 10 months. With the advent of Generative AI, things may move faster. This is because in August 2022, a London-based startup called Stability AI released its text-to-image tool, Stable Diffusion, to the masses, giving every developer access to key technology for generating images and art. By November, Stability AI had tied up with Amazon, and Stable Diffusion was available on the AWS platform. That is when OpenAI decided to unleash ChatGPT and DALL-E 2.</p><p>Google upgraded Apprentice Bard to add a chat function to Google Search, while Meta showcased its generative AI tools for Instagram and WhatsApp. Soon enough, Baidu, Tencent, and Alibaba had their own versions to match. The AI war between tech giants has just begun. While Amazon has developed CodeWhisperer, a generative AI tool that writes code, it is not entering the fray directly but is providing the AWS platform for other developers. As AWS chief executive Adam Selipsky says, "It’s truly day one in generative AI." Independent developers could use the AWS platform to develop their own Generative AI tools to stay ahead.</p><p>(The writer is a journalist and author of four books on the economy, banking, and tech)</p>