The Race To Generative AI

Artificial intelligence (AI) continues to weave its way into our lives, through avenues such as self-driving cars and drones, and more recently through the cultural shift around a fast-rising term: Generative AI, built primarily on Large Language Models (LLMs). Since the release of ChatGPT, LLMs have become extremely popular for a whole range of purposes, such as data exploration and research support. They have not only made preliminary research on a topic quicker, but the personalised responses they generate have made it more convenient and simpler than ever to get a task done. This has also encouraged companies and data professionals to build more efficient intelligent tools that mimic existing LLMs, creating the perception of a race towards fully automated AI systems in every aspect of our lives, from financial decisions to medical diagnostics. It is certainly a supremely impressive area of technology, but, as with anything, misuse can cause more damage than leaving the potential untapped, and this particular AI development can have adverse repercussions in ethical and social spheres.

The following are some key ethical concerns attached to AI. This blog will shed a little light on these complex territories of LLMs, which must be explored at the same pace as the technologies themselves are built and grown:

  • Invasions of Privacy or Copyright Violation

  • Fairness - Bias and Discrimination

  • Transparency - Non-transparent, Unexplainable, or Unjustifiable

  • Denial of Individual Autonomy, Recourse, and Rights

  • Isolation and Disintegration of Social Connection

  • Unreliable, Unsafe, or Poor-Quality Outcomes

The first and foremost concern is the data used to train these LLMs: what data has been used, was it all open source, or does its use violate copyright? As one article notes, ‘LLMs train on the text data from various users to continually build its knowledge base, raising the risk that personal information may be exposed’; how are developers ensuring that these models do not invade privacy and breach confidentiality? This leads to the second question: how is the data being collected, and do the data sources reflect a diverse spectrum of society? The most common and simplest kind of bias is a model associating certain professions with a specific gender. Another example is an automated decisioning system for approving loans at a bank that approves more applications from men than from women, simply because the training data contained a higher proportion of men being approved for loans in the past (a minimal sketch of how such a skew might be checked follows below).
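
To make the loan example concrete, here is a minimal sketch, in Python with entirely made-up data, of how one might check historical approval decisions for this kind of skew. The column names and figures are illustrative assumptions, and the "disparate impact" ratio is a common rule-of-thumb fairness check, not the method of any particular system.

```python
import pandas as pd

# Hypothetical historical decisions an automated system might be trained on.
# The data and column names are invented purely for illustration.
decisions = pd.DataFrame({
    "gender":   ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"],
    "approved": [1,   1,   1,   0,   1,   0,   1,   0,   0,   1],
})

# Approval rate per group.
rates = decisions.groupby("gender")["approved"].mean()
print(rates)   # F: 0.4, M: 0.8

# Disparate impact ratio between the least- and most-favoured groups.
# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8
# as a sign that the data, and any model trained on it, may be skewed.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")   # 0.50 -> flagged
```

A model trained naively on such data would simply learn to reproduce this imbalance, which is why auditing the training data is as important as auditing the model itself.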

Another dimension to consider is how we ensure that the inventors of these AI systems have communicated with and involved those directly impacted by them. Were any observations or analyses conducted of the social, ethical, and environmental impacts, beyond legal obligations? For instance, the opinion or analysis of a psychologist, sociologist, or environmentalist.

To ensure continuous development towards responsible generative AI, the interpretability of LLMs is equally important. It is not just a question of government regulating these tools, but also of how industry, academia, NGOs, and government come together to understand:

  • the inner workings of generative AI;

  • how to enable its safe and informed use without stifling innovation;

  • its impact on employment, in terms of fairness and bias, and how these tools can be incorporated into professions to make jobs easier for humans;

  • the legal repercussions of copyright violations, and most importantly how violations will be measured and assessed when the content is so easily available on the internet. This could be challenging and tricky, as people post on social media and sign agreements providing their data to companies for their use;

  • not only human responsibility while building these systems, but also the responsible use of generative AI.

However, this points to a bigger question: how is this technology being followed and adopted? What proportion of the public is aware of the ethical boundaries and their responsibility towards them, for instance, what they can and cannot put into these models while using them? And does that proportion keep pace with the number of generative AI users?
