This week, I had the privilege of attending Canada’s national AI conference, run by the Digital Finance Institute. The conference covered topics like Artificial Intelligence, Quantum Computing, Machine Learning, Self-Driving Cars, Innovation, Financial Intelligence, AI Law, Chatbots, the Future of Banking with AI, etc. Here are my ‘polished’ notes from the full day:
The first open discussion, interestingly enough, focused on the challenges and threats of AI. As most of us already know, the most-cited negative about AI is the projected loss of approx. 110 million jobs by 2025 (to put that into perspective, that’s almost as big as the population of Japan or Russia). Reinforcing this projection, the US government did NOT dispute that 50% of its workforce could be lost to automation and AI. The most significant losses will occur in low- and mid-range jobs; that’s where the threat of massive unemployment is real.
Threat #1) AI & robots will cause a massive wealth disparity between the rich and poor.
One counter-argument to this threat is that the technology is not meant to be built in labs by people in white coats, or to sit in the hands of only the wealthy; fundamentally, this technology is meant to be easily accessible and creatable by anyone, anywhere.
Of course, there are lots of unanswered implications, especially around the legal aspects. The technology effectively puts the responsibilities of ethics, law, and morality in the hands of twenty-something-year-old developers. The difficulty is creating a technology that can decide the fate of people with the same conscience as a human being. A good example of this challenge is Autonomous Vehicles (AVs). Take this scenario, for instance: if a person trips onto the road in front of an oncoming AV, the car has to either swerve into a barrier/curb (potentially killing the passengers) or continue driving straight (potentially killing the pedestrian). What decision should it make? If it does kill someone, who’s to blame? The technology? The mechanical engineers? The developers? It’s definitely a tough one to answer.
Threat #2) There’s no governing law around AI. The algorithms are being written by non-lawyers, with no input from the legal profession, so there’s no equivalent of “first, do no harm” coded into AI decision making. This is why many are scared of what the future will look like with unregulated robots that have the potential to harm humans.
….alright, enough with the bad news. Here’s the good stuff.
According to the US, approx. $12 billion per year could be saved by getting smarter about cutting costs, allocating assets, improving investments, and optimizing currently inefficient systems. Think about how congested our streets have become, and how advantageous it would be for people and the green economy if AI could solve traffic problems.
Asia is by far ahead of us. They’ve adopted chatbots across almost every industry, e.g. China has an “Ask a Doctor”, an “Ask an Accountant”, an “Ask a…” for pretty much everything.
What can Canada do? We should definitely identify areas where we’re already strong and transfer that knowledge. There are opportunities to further conversations based on MOUs (memoranda of understanding), because to fully leverage the benefits of these technologies, they can’t be operated in isolation.
“Linear algebra is the crux of AI…looking back at it, that course in university was a very important one.”
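To see why linear algebra matters so much: at its core, a neural-network layer is just a matrix–vector product followed by a simple nonlinearity. Here’s a minimal pure-Python sketch (the weights and inputs are made up purely for illustration):

```python
# A single dense (fully connected) layer: y = activation(W·x + b).
# Training a network is largely about optimizing the entries of
# matrices like W — which is why linear algebra is "the crux of AI".

def relu(v):
    """Rectified linear unit, applied element-wise."""
    return [max(0.0, x) for x in v]

def dense_layer(W, b, x):
    """Matrix-vector product W·x, plus bias b, then ReLU."""
    z = [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
         for row, b_i in zip(W, b)]
    return relu(z)

# Illustrative (made-up) weights: 2 inputs -> 3 hidden units.
W = [[0.5, -1.0],
     [1.5,  0.2],
     [-0.3, 0.8]]
b = [0.1, -0.2, 0.0]

print(dense_layer(W, b, [1.0, 2.0]))  # approximately [0.0, 1.7, 1.3]
```

Real frameworks do exactly this, just with enormous matrices on GPUs, which is also why the GPU-infrastructure point later in these notes matters.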
People are scared of AI for the wrong reasons. AI isn’t going to be the blender that comes in to replace the knife that cuts the vegetables, it’ll probably be the knife that collects data on how well the vegetables are being cut, and then over time optimizes the knife’s functionality to improve factors like speed, power, and accuracy. The end game is to make smarter products that can do better things at a higher level.
We often take things like YouTube recommendations or Siri’s voice recognition for granted, but it takes a lot of AI throughput/awareness to understand “that”: tell me about “that”, how do I do “that”, what does “that” do, how does “that” affect me, etc.
“We should be less intimidated by the possibilities these technologies have and more worried about creating mediocre technology which results in a mediocre future”
One of the things I found really fascinating was the endless number of use cases where AI has been applied. Here are some cool examples:
Cortex – an AI startup focused on creativity: if you think about it, AI’s most interesting challenge is creativity. As humans, we have creativity pushed on us from an early age (toys, classroom activities, games, drawings, arts, etc.). However, we are now beginning to redefine what creativity means. Creativity in today’s world could mean posting a picture on Instagram with a funny caption that draws people’s attention to a new product a company is launching as part of its marketing campaign.
Technology has heavily impacted marketers. In 1940, marketing content was limited to channels like print, direct mail, radio, and television, so volumes were manageable. However, new channels were eventually introduced, the dissemination of content grew exponentially, and today, global brands can no longer create enough effective content with human teams alone. The solution? Machine vision. It has become much cheaper and more accurate to have AI scan an image and understand what the picture means. For instance, if you present a picture of a jet ski, AI will zero in on the jet ski’s logo, show you where you can buy it, for how much, where the shops that sell it are located, which shop is closest to you, etc.
Did I scare you yet?
Well, listen to this. There’s an AI that developers trained on a large corpus of romance novels. This allowed the technology to generate countless stories simply from analyzing a picture, i.e. it could contextually figure out, by itself, what the image was saying and output related content from its data brain.
Cortex for example, uses AI to enrich the marketing content of companies. They leverage big data, machine learning, and automation to find patterns, clusters, and trends to auto generate marketing calendars. These calendars tell companies exactly what days they should disseminate content, alerts on what competitors are doing, the budgets behind each piece of content, and even auto-populate the contents (picture, caption, suggested hashtags).
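Cortex’s internals aren’t public, but the “find clusters in historical data, then suggest a calendar” idea can be illustrated with a toy k-means clustering of past posting hours. Everything here (the data, the choice of k-means, the three slots) is my own made-up sketch, not Cortex’s actual method:

```python
# Toy sketch of the clustering idea behind an auto-generated posting
# calendar: group the hours when past posts got high engagement into
# k clusters, and use the cluster centers as suggested posting slots.

def kmeans_1d(points, k, iters=20):
    """Basic 1-D k-means (k >= 2). Returns sorted cluster centers."""
    # Initialize centers spread evenly across the data range.
    centers = [min(points) + i * (max(points) - min(points)) / (k - 1)
               for i in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Hours (0-23) when past posts got high engagement (made-up data).
hours = [8, 9, 9, 10, 12, 13, 13, 17, 18, 18, 19]
print(kmeans_1d(hours, k=3))  # three suggested posting slots
```

On this toy data the slots come out around morning, midday, and early evening; a real system would of course also weigh competitors, budgets, and content type, as described above.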
“Machine learning will be the basis and fundamentals of every successful huge IPO win in 5 years.” – Eric Schmidt at Google NEXT 2016
The promise of AI is to produce solutions to previously unsolved problems.
The video above was shown during the conference; I was in awe. Google acquired DeepMind, the machine-learning company behind this game-playing platform, for $400M. The video shows how the platform uses AI to learn to play Atari and improve itself to superhuman levels with each run. Eventually, after a good amount of training, the technology realizes that digging a tunnel through the wall is the most effective technique to beat the game.
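DeepMind’s Atari agent used deep Q-networks, where a neural network stands in for a table of action values. As a much-simplified sketch of the same underlying idea, here is tabular Q-learning on a made-up five-cell corridor, where the agent teaches itself by trial and error that walking right reaches the reward:

```python
import random

# Tabular Q-learning on a toy corridor: states 0..4, reward at state 4.
# DeepMind's Atari agent replaces this table with a neural network fed
# raw pixels, but the trial-and-error update rule is the same idea.

random.seed(0)
N_STATES, ACTIONS = 5, [-1, +1]      # actions: move left / move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1

# Q[(state, action)] = learned estimate of long-term reward.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = max(0, min(N_STATES - 1, s + a))
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge Q toward reward + discounted future value.
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The greedy policy the agent learned for each non-terminal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The agent starts knowing nothing, stumbles into the reward by exploring, and the value propagates backwards until every state prefers “move right”; the Atari tunnel trick emerges from exactly this kind of self-improvement, just at vastly larger scale.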
Another really neat use case was presented by a company called Shakespeare.ai, which developed a Chrome extension that lets people write personalized emails in an automated, efficient way. The founders recognized that a salesperson needs to put some effort into each email, especially when dealing with leads/prospects, to improve reply and open rates. They also found that sales emails, in general, suck. Since it’s an add-on to your Gmail, the extension automatically provides you with all the research you need on the lead you’re emailing, which you can use to better customize the message. They source the information from Google, LinkedIn, online sites, articles, and anything else they can find on the person you’re interacting with.
Here are notes from two panels I found really interesting:
Innovation Leadership in AI (delivered by university professors from McGill University, University of Toronto, and University of Guelph):
One of the things Canada can do in the AI industry is establish critical mass, reduce barriers to entry for practitioners, and bring more scientists into the country.
Canada’s competitive advantage in AI is its research strength. However, we need to build on that strength as quickly as possible and allow a larger flow of talent from universities to grow the labour ecosystem.
Stop the brain drain: lots of quality people are leaving Canada to work in other countries. Companies have as much responsibility as the government to support the country’s adaptability to innovative technologies. We have to start early and encourage students to get engaged and exposed to the world of STEM (science, technology, engineering, and mathematics); universities share this responsibility.
We also have to resolve the lack of resources and finances. On resources, Canada has to increase infrastructure spending to improve AI throughput, particularly around GPUs: buying them fast enough and providing more processing power to the many research facilities and startups that need it. On finances, Canada needs to become more aggressive from a capital-markets standpoint and ensure startups with good tech can raise financing without having to go to the US.
Banks – How Artificial Intelligence is Transforming Banking (delivered by the VP of Innovation at RBC and the VP of Innovation at CIBC):
RBC has 70-80K employees and is one of the largest companies (by assets and profit) in Canada, so it has huge internal resources to tap into to find people interested in getting involved in AI.
“AI is going to be the thing that powers everything else.” – Andrew Ng, Co-founder of Coursera
AI is not just about big data: it encompasses massive computing power and data that’s growing exponentially. It’s not just a fad or trend; it’s a force that will transform everything that has a data element to it.
Technology has been “about to take all the jobs” for more than 200 years, yet people act like this is unique to our era. To prove the point, below is an image of a New York Times article from 1928 discussing the “prevalence of unemployment with greatly increased industrial output”.
Another big question being asked: what skills will be required in the future? The easy answer is skills specializing in STEM and AI, but it’s really more about having the right mindset: a growth mindset, one that is comfortable with ambiguity, change, and technology, and that values intellectual agility and curiosity.