Data Governance in India: The Case for Regulation
What constitutes ‘fair use’ of data is coming under increasing scrutiny from regulators across the world. The digital explosion of the past few years has unleashed a deluge of data, and organisations globally have jumped at the prospect of achieving competitive advantage through ever more refined data mining. In the race to mine every bit of data possible and use it to inform and improve algorithmic models, we have lost sight of what data we should be collecting and processing in the first place. There also seems to be a deficit of attention to what constitutes a breach, and to how offending parties should be identified and prosecuted for unfair use.
There is a growing call for these questions to be addressed through regulation of some form. With examples of detrimental use of data surfacing regularly, businesses, individuals and society at large are demanding an answer to exactly what data can be collected, and how it should be aggregated, stored, managed and processed.
If data is indeed the new oil, we need a strong understanding of what constitutes fair use of this invaluable resource. This article attempts to highlight India’s stance on regulatory measures to govern the use of data.
Importance of Data Governance
Before we get into what data governance should mean in the Indian context, let us first look at how data governance is defined and why it is an important field to wrap our heads around.
In simple terms, data governance is the framework that lays down the strategy for how data is used and managed within an organisation. Data governance leaders must stay abreast of the legal and regulatory frameworks specific to the geographies they operate in and ensure that their organisations comply with the applicable rules. Much of their effort at present is aimed at maintaining the sanctity of organisational data and ensuring that it does not fall into the wrong hands. As such, the time and effort expended on adhering to these norms is contingent upon the risk associated with a potential breach or loss of data.
In effect, a data governance framework is intended to ensure that a defined set of rules is applied and enforced so that data is used appropriately within an organisation.
Data Governance in Indian Context
India is rapidly moving towards digitisation. Internet connectivity has exploded in the last few years, leading to rapid adoption of internet-enabled applications — social media, online shopping, digital wallets and so on. The result of this increasing connectivity and adoption is a fast-growing digital footprint of Indian citizens. Add to this the proliferation and adoption of the Aadhaar programme, and almost every citizen now has a personal digital footprint somewhere, codified in the form of data.
With a footprint of this magnitude, there is an element of risk attached. What if this data falls into the wrong hands? What if personal data is used to manipulate citizens? What protection mechanisms do citizens have against potential overreach by the stewards of the data themselves? It is time we found answers to these very pertinent questions, and data governance regulation is how we will find comprehensive answers to them.
Perspectives for India
The pertinent government departments are deliberating on the collective stand that should be taken while formulating data governance norms. For one, Indian citizens are protected by a recent Supreme Court ruling that privacy is a fundamental right. This has led to a heightened sense of urgency around arriving at a legislative framework for addressing genuine concerns around data protection and privacy, as well as cybersecurity.
As a result of these concerns, the Central government recently set up a committee of experts, led by Justice BN Srikrishna, tasked with formulating data governance norms. This committee is expected to maintain the delicate balance between protecting the privacy of citizens and fostering the growth of the digital economy. Their initial work — legal deliberations and benchmarking against similar legal frameworks such as the GDPR (General Data Protection Regulation) — has resulted in the identification of seven key principles around which any data protection framework needs to be built. Three of the most crucial are:
1. Informed Consent: Consent is deemed to be an expression of human autonomy. While collecting personal data, it is critical that users be adequately informed about how the data is intended to be used before their express consent to provide it is captured.
2. Data Minimisation: Data should not be collected indiscriminately. Data collected should be minimal: necessary for the purposes for which it is sought, and for other compatible purposes beneficial to the data subject.
3. Structured Enforcement: Enforcement of the data protection framework must be by a high-powered statutory authority with sufficient capacity. Without statutory authority, any remedial measures sought by citizens over data privacy infringement will be meaningless.
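The data-minimisation principle above can be sketched in code. The following is a minimal illustration only, assuming a hypothetical purpose-to-fields mapping; the purposes, field names and function are invented for this sketch and are not drawn from the committee’s framework:

```python
# Hypothetical illustration of data minimisation: retain only the
# fields necessary for the purpose the data was declared for.

ALLOWED_FIELDS = {
    "account_creation": {"name", "email"},
    "age_verification": {"date_of_birth"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Drop every field not necessary for the declared purpose."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "name": "A. Citizen",
    "email": "a@example.org",
    "date_of_birth": "1990-01-01",
    "location": "Mumbai",
}

# For account creation, date of birth and location are discarded.
print(minimise(raw, "account_creation"))
```

The point of the sketch is that minimisation is enforced structurally, at the point of collection, rather than left to downstream discretion: an undeclared purpose yields an empty record.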
Striking the right balance between fostering an environment in which the digital economy can grow to its full potential and protecting the rights of citizens is extremely difficult.
With a multitude of mala fide parties today seeking to leverage the personal data of citizens for malicious purposes, it is crucial that the government and the legal system set out a framework that protects the sovereignty and interests of the people. By allaying fears of data misuse, such a framework will help the digital economy grow, as people become less fearful and contribute information more enthusiastically where a meaningful outcome can be achieved.
Humans and AI: Towards a Symbiotic Autonomy
While some predict mass unemployment or all-out war between humans and artificial intelligence, others foresee a less bleak future: one in which humans and intelligent systems are inseparable, bound together in a continual exchange of information and goals, a “symbiotic autonomy”, if you will. It will be hard to distinguish human agency from automated assistance, but neither people nor software will be of much use without the other.
Mutual Co-existence – A Symbiotic Autonomy
In the future, I believe there will be a co-existence between humans and artificial intelligence systems that will hopefully be of service to humanity. These will include software systems that handle the digital world, systems that move around in physical space, like drones, robots and autonomous cars, and systems that process the physical space, like the Internet of Things.
I don’t think AI will become an existential threat to humanity. Not that it’s impossible, but we would have to be very stupid to let that happen. Others have claimed that we would have to be very smart to prevent it from happening, but I don’t think that’s true.
If we are smart enough to build machines with super-human intelligence, chances are we will not be stupid enough to give them infinite power to destroy humanity. There is also a fallacy at work here, stemming from the fact that our only exposure to intelligence is through other humans. There is absolutely no reason to believe that intelligent machines will even want to dominate the world or threaten humanity. The will to dominate is a very human one (and only for certain humans).
Even in humans, intelligence is not correlated with a desire for power. In fact, current events tell us that the thirst for power can be excessive (and somewhat successful) in people with limited intelligence.
You will have more intelligent systems in the physical world, too — not just on your cell phone or computer, but physically present around us, processing and sensing information about the physical world and helping us with decisions that include knowing a lot about features of the physical world. As time goes by, we’ll also see these AI systems having an impact on broader problems in society: managing traffic in a big city, for instance; making complex predictions about the climate; supporting humans in the big decisions they have to make.
Intelligence of Accountability
A lot of companies are working hard on making machines able to explain themselves: to be accountable for the decisions they make, to be transparent. A lot of the research we do is about letting humans or users query the system. When Cobot, my robot, arrives at my office slightly late, a person can ask, “Why are you late?” or “Which route did you take?”
So we are working on the ability of these AI systems to explain themselves as they learn and improve, and to provide explanations at different levels of detail. People want to interact with these robots in ways that will eventually make us humans trust AI systems more. You would like to be able to say, “Why are you saying that?” or “Why are you recommending this?” Providing that explanation is a large part of the research being done, and I believe that robots being able to do this will lead to better understanding of, and trust in, these AI systems. Eventually, through these interactions, humans will also be able to correct the AI systems, so we are trying to incorporate these corrections and have the systems learn from instruction. I think that’s a big part of our ability to coexist with these AI systems.
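The idea of a queryable, accountable system can be sketched very simply. The following is a hypothetical illustration only; Cobot’s actual software is not described here, and the class, method names and log entries are all invented for this sketch:

```python
# Hypothetical sketch of a queryable decision log, in the spirit of
# asking a robot "Why are you late?" or "Which route did you take?"

class DecisionLog:
    def __init__(self):
        # (decision, reason) pairs, in the order they were recorded
        self.entries = []

    def record(self, decision: str, reason: str) -> None:
        self.entries.append((decision, reason))

    def explain(self, decision: str) -> str:
        """Return the most recent recorded reason for a decision."""
        for d, reason in reversed(self.entries):
            if d == decision:
                return reason
        return "no record of that decision"

log = DecisionLog()
log.record("route", "took corridor B; corridor A was blocked")
log.record("delay", "waited 40s for an elevator")
print(log.explain("route"))
```

The design choice worth noting is that explanations are captured at decision time, not reconstructed after the fact, which is what makes the system’s answers to “why” questions trustworthy.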
The Worst Case Contingency
A lot of the bad things humans do to each other are very specific to human nature. Behaviours like becoming violent when we feel threatened, being jealous, wanting exclusive access to resources, and preferring our next of kin to strangers were built into us by evolution for the survival of the species. Intelligent machines will not have these basic behaviours unless we explicitly build them in. Why would we?
Also, if someone deliberately builds a dangerous, generally intelligent AI, others will be able to build a second, narrower AI whose only purpose is to destroy the first. If both AIs have access to the same amount of computing resources, the second one will win, just as a tiger, a shark or a virus can kill a human of superior intelligence.
In October 2014, Elon Musk ignited a global discussion on the perils of artificial intelligence. Humans might be doomed if we make machines that are smarter than us, Musk warned. He called artificial intelligence our greatest existential threat.
Musk explained that his attempts to sound the alarm on artificial intelligence didn’t have an impact, so he decided to try to develop artificial intelligence in a way that would have a positive effect on humanity.
Brain-machine interfaces could overhaul what it means to be human and how we live. Today, technology is implanted in brains only in very limited cases, such as to treat Parkinson’s disease. Musk wants to go further, creating a robust plug-in for our brains that every human could use. The brain plug-in would connect to the cloud, allowing anyone with a device to immediately share thoughts.
Humans could communicate without having to talk, call, email or text. Colleagues scattered throughout the globe could brainstorm via a mindmeld. Learning would be instantaneous. Entertainment would be any experience we desired. Ideas and experiences could be shared from brain to brain.
We would be living in virtual reality, without having to wear cumbersome goggles. You could re-live a friend’s trip to Antarctica — hearing the sound of penguins, feeling the cold ice — all while your body sits on your couch.
Final Word – Is AI Uncertainty Really About AI?
I think that the research being done on autonomous systems — autonomous cars, autonomous robots — is a call to humanity to be responsible. In some sense, it has nothing to do with the AI. The technology will be developed. It was invented by us, by humans. It didn’t come from the sky. It’s our own discovery. It’s the human mind that conceived such technology, and it’s up to the human mind to make good use of it.
I’m optimistic because I really think that humanity is aware that it needs to handle this technology carefully. It’s a question of being responsible, just as with any other technology ever conceived, including potentially devastating ones like nuclear armaments. But the best thing to do is invest in education. Leave the robots alone. The robots will keep getting better, but focus on education, on people knowing each other, caring for each other. Caring for the advancement of society. Caring for the advancement of Earth, of nature, improving science. There are so many things we can get involved in as humankind that could make good use of the technology we’re developing.