<p>Bengaluru: For a few weeks this year, villagers in the southwestern Indian state of Karnataka read out dozens of sentences in their native Kannada language into an app as part of a project to build the country's first AI-based chatbot for tuberculosis.</p><p>Kannada has more than 40 million native speakers in India; it is one of the country's 22 official languages and among the more than 121 languages spoken by 10,000 people or more in the world's most populous nation.</p><p>But few of these languages are covered by natural language processing (NLP), the branch of artificial intelligence that enables computers to understand text and spoken words.</p><p>Hundreds of millions of Indians are thus excluded from useful information and many economic opportunities.</p><p>"For AI tools to work for everyone, they need to also cater to people who don't speak English or French or Spanish," said Kalika Bali, principal researcher at Microsoft Research India.</p><p>"But if we had to collect as much data in Indian languages as went into a large language model like GPT, we'd be waiting another 10 years. So what we can do is create layers on top of generative AI models such as ChatGPT or Llama," Bali told the Thomson Reuters Foundation.</p><p>The villagers in Karnataka are among thousands of speakers of different Indian languages generating speech data for tech firm Karya, which is building datasets for firms such as Microsoft and Google to use in AI models for education, healthcare and other services.</p><p>The Indian government, which aims to deliver more services digitally, is also building language datasets through Bhashini, an AI-led language translation system that is creating open-source datasets in local languages for the development of AI tools.</p><p>The platform includes a crowdsourcing initiative for people to contribute sentences in various languages, validate audio or text transcribed by others, translate texts and label images.</p><p>Tens of thousands of Indians have contributed to Bhashini.</p><p>"The government is pushing very strongly to create datasets to train large language models in Indian languages, and these are already in use in translation tools for education, tourism and in the courts," said Pushpak Bhattacharyya, head of the Computation for Indian Language Technology Lab in Mumbai.</p><p>"But there are many challenges: Indian languages mainly have an oral tradition, electronic records are not plentiful, and there is a lot of code mixing. Also, to collect data in less common languages is hard, and requires a special effort."</p><p><strong>Economic value</strong></p><p>Of the more than 7,000 living languages in the world, fewer than 100 are covered by major NLP systems, with English the most advanced.</p><p>ChatGPT - whose launch last year triggered a wave of interest in generative AI - is trained primarily on English.
Google's Bard is limited to English, and of the nine languages that Amazon's Alexa can respond to, only three are non-European: Arabic, Hindi and Japanese.</p><p>Governments and startups are trying to bridge this gap.</p><p>Grassroots organisation Masakhane aims to strengthen NLP research in African languages, while in the United Arab Emirates, a new large language model called Jais can power generative AI applications in Arabic.</p><p>For a country like India, crowdsourcing is an effective way to collect speech and language data, said Bali, who was named among the 100 most influential people in AI by Time magazine in September.</p><p>"Crowdsourcing also helps to capture linguistic, cultural and socio-economic nuances," said Bali.</p><p>"But there has to be awareness of gender, ethnic and socio-economic bias, and it has to be done ethically, by educating the workers, paying them, and making a specific effort to collect smaller languages," she said. "Otherwise it doesn't scale."</p><p>With the rapid growth of AI, there is demand for languages "we haven't even heard of", including from academics looking to preserve them, said Karya co-founder Safiya Husain.</p><p>Karya works with non-profit organisations to identify workers who are below the poverty line, or with an annual income of less than $325, and pays them about $5 an hour to generate data - well above the minimum wage in India.</p><p>Workers own a part of the data they generate so they can earn royalties, and there is potential to build AI products for the community with that data, in areas such as healthcare and farming, Husain said.</p><p>"We see huge potential for adding economic value with speech data - an hour of Odia speech data used to cost about $3-$4, now it's $40," she said, referring to the language of the eastern state of Odisha.</p><p><strong>Village voice</strong></p><p>Fewer than 11% of India's 1.4 billion people speak English. Much of the population is not comfortable reading and writing, so several AI models focus on speech and speech recognition.</p><p>Google-funded Project Vaani, or voice, is collecting speech data from about 1 million Indians and open-sourcing it for use in automatic speech recognition and speech-to-speech translation.</p><p>Bengaluru-based EkStep Foundation's AI-based translation tools are used at the Supreme Courts of India and Bangladesh, while the government-backed AI4Bharat centre has launched Jugalbandi, an AI-based chatbot that can answer questions on welfare schemes in several Indian languages.</p><p>The bot, named after a duet in which two musicians riff off each other, uses language models from AI4Bharat and reasoning models from Microsoft, and can be accessed on WhatsApp, which is used by about 500 million people in India.</p><p>Gram Vaani, or voice of the village, a social enterprise that works with farmers, also uses AI-based chatbots to respond to questions on welfare benefits.</p><p>"Automatic speech recognition technologies are helping to mitigate language barriers and provide outreach at the grassroots level," said Shubhmoy Kumar Garg, a product lead at Gram Vaani.</p><p>"They will help empower communities which need them the most."</p><p>For Swarnalata Nayak in Raghurajpur district in Odisha, the growing demand for speech data in her native Odia has also meant much-needed additional income from her work for Karya.</p><p>"I do the work at night, when I am free. I can provide for my family through talking on the phone," she said.</p>