Artificial intelligence (AI) is the apex of today’s technology age as we push the boundaries towards Industry 4.0 and superintelligence. This is both exciting and scary for many companies in the world.
It is already here in disruptive narrow forms, cutting across the economy and leaving no sector untouched. It has the potential to solve many of our biggest and most persistent problems.
While there is much to be excited about, there are also critical questions that we must ask from a policy and legal perspective. An interesting article on cover.co.za pointed to what this new world may look like.
The article points out that economic drivers constantly push for the optimisation of capital and labour. Everything that can be digitised and automated, will be. Historically we’ve been good at creating new jobs, but this will not necessarily be the case here – at least not to the same degree.
We will need to upskill or pivot to remain relevant in the future world of work. A World Economic Forum study reports that creativity, emotional intelligence and cognitive flexibility will be among the most valued competencies by 2020.
The article adds that if there is widespread job loss, then we need to consider how to support the unemployed. Should we provide for a universal basic income? Will we spend our time constructively without work? Many of us derive purpose from our work, so how will this impact our self-worth and happiness?
AI will widen the wealth gap, concentrating wealth among AI companies. Should these companies be taxed to fund social grants? We are already digitally obese “cyborgs” attached to our technology, so it’s likely that AI augmentation of brains and bodies will be in demand. The wealthy will disproportionately reap these benefits. Since intelligence also provides power and opportunities, should access to AI become a basic human right?
And when, if ever, should machines be recognised as deserving of “humane” treatment and legal rights? Should it depend on their levels of perception, feeling and understanding?
Like any human or system, AI is not infallible. But the adverse consequences of defective AI compound dramatically as we place more reliance on AI.
The article points out that we are increasingly delegating decisions that affect our lives and livelihoods to imperfect systems. When automated systems can decide who receives parole, or even who lives or dies (think self-driving car accidents, medical diagnostics and autonomous weapons), we should demand transparency about how those decisions are reached.
The article points out that powerful AI could land in the wrong hands. AI can be hacked to access valuable data pools and repurposed for nefarious means. And with the military investing in autonomous weapons, an arms race has started.
A related issue is control. The current direction of AI research focuses on machine-learning systems that can self-learn and take action without human input, intervention or oversight.
The article adds that this is a problem: if humans don’t have control or veto rights over increasingly intelligent and pervasive AI (or control is restricted to a few elite individuals), then we could face serious unintended worst-case scenarios far beyond science fiction and killer robots.
Regulation responds to the ethics and concerns of society, and our law will need to address the AI policy issues and risk areas. Given AI’s positive transformative potential, we should avoid overregulation that unduly restricts innovation.
The article points out that the work should begin now. Adopting a wait-and-see approach before imposing regulation would be unwise, as even one big mistake could have dire consequences for our future.
Existing laws will need to be applied to address liability for defective, unsafe, maliciously repurposed and rogue AI. The difficulty is that the legal tests often require a determination of “reasonableness” and “wrongfulness”, which are tricky to determine in a world where it is accepted that (i) no system is error-free or completely secure from unauthorised access despite best efforts, and (ii) successful AI research and development (R&D) is linked to increasing automation and reducing human control and intervention.
The article adds that one clear area for regulation is to circumscribe the conditions for safe AI R&D. Microsoft proposes conditions that include design robustness, transparency of operation, data privacy, accountability and preventing bias. This is a good place to start.
The current laws don’t go far enough to deal with the nuances of this technology. An example is who owns, or should own, the intellectual property created by AI. Should this be the manufacturer or user of the system? This will be an essential question to answer in practice.
While regulation catches up, contracts should be carefully drafted to plug legal gaps, limit liability and appropriately allocate risk. Companies should develop internal policies and good corporate governance structures to record AI risks and judiciously manage AI implementation.
All things considered, there is no doubt that finding balanced and meaningful responses to AI issues will be among the most complex, urgent and fundamental tasks for our regulators in the coming years.
Switching insurance providers has never been easier for consumers than it is in the digital age. A price comparison website can provide multiple quotes in seconds. Another article on cover.co.za painted a perfect picture of this.
The article points out that a quick trip to an insurer’s Facebook page or Hellopeter profile condenses thousands of word-of-mouth customer experiences into a quick online read, while increasingly advanced chatbots are an omnipresent force on practically all insurers’ websites, able to answer customer questions with no human intervention at all.
It’s safe to say that these developments were not dreamt up by the businesses themselves – they were demanded by increasingly switched-on and digitally minded consumers. The insurers that are excelling today (not to mention those that will thrive in the future) are those that take those demands seriously.
The article adds that what was once a drawn-out process of choosing a policy and filling out forms can now be done without so much as a phone call. Consumers are always-on. They are digitally savvy, and their service-level expectations are constantly rising. Today’s top insurers are those who have adjusted their business models to match, and are making use of every digital tool at their disposal to ensure a competitive edge. Here are just three ways the insurance industry is catering to the customer of the future.
On top of being one of the most widespread advertising techniques in the industry today, social media is being used for a wide variety of functions by consumer and insurer alike.
The article points out that a simple inbox message or even a post comment can turn a lead into a customer if handled correctly – just as a single mismanaged negative comment from a policyholder can irreparably affect a business’s corporate image. Social media also allows for a better understanding – and therefore more effective targeting – of potential policyholders. The worlds of the consumer and the insurer are blurring together by the day.
With such a complex mix of risk and opportunity, the savvy insurer will likely be placing far more emphasis on the management of a wide array of social media channels – and it’s already happening. Research by LIMRA, the world’s largest association of life insurance and financial services companies, found that 93% of life insurance companies have social media programmes in place.
The article points out that the trusted old call centres of yesteryear are gone. Today’s top insurers have contact centres that boast enormous data storage and computing power, all in the name of a more personalised customer experience. Modern PBX systems can route a customer’s call to exactly the right person to handle their query, with no receptionist intervention necessary.
The article adds that agents are able to call up a customer’s client history and risk profile at a moment’s notice, and offer tailored premiums based on that data in a heartbeat. The ability to record calls and gather data about their duration and outcomes also allows insurers to better train their customer service agents for a more seamless experience, and extremely detailed reporting capabilities allow for improvements to the whole process on a month-by-month basis.
Digital technologies are helping the insurance sector develop in the areas of speed, relevance, context, personalisation and empathy. And it can only be to the consumer’s advantage that insurers are adapting in order to close the gap between what customers want and what they deliver.