This course is from edX.
About this course
FinTech has started a global revolution in the financial services industry, and the transformation will only increase in coming years. There are many ways in which FinTech can improve the lives of people around the world; however, those same technologies can also be used to enslave, coerce, track, and control people. Accordingly, it is appropriate and necessary to consider the implications of the introduction of these technologies so that they are utilized properly, regulated sufficiently, and their adoption does not come at the expense of societal growth.
This 6-week online course covers 6 modules, representing the full spectrum of finance, technology, and the introduction of FinTech solutions globally. We will ask questions that are not often asked or addressed when new technologies are adopted. Why should we adopt FinTech solutions, and what are the best ways to introduce disruptive technologies? How does blockchain technology change the way we provide financial services, and how should blockchain technology be governed? Is FinTech creating risks in cybersecurity, and how can technology help us prevent financial crimes? As Artificial Intelligence (AI) is developed and adopted, will human biases and prejudices be built into such mechanisms? And at a larger scope, should FinTech lead to a decentralized, democratized system of finance, or will existing institutions adopt FinTech strategies to cement their existing hold on the financial markets?
Through discussing and attempting to answer these questions, you will better understand how the introduction of these technologies can benefit or harm society. And through considering the proper application and introduction of such technologies, you will learn to make better decisions as individuals and organizations when facing the question: is FinTech our savior or a villain?
What you’ll learn
Understand the ethical elements of finance, emerging technologies, and FinTech.
Identify trends and opportunities that will shape the future of FinTech.
Critically examine the implications of Artificial Intelligence (AI), blockchain, and cryptocurrencies (including ICOs).
Understand how Regulatory Technology (RegTech) enhances supervision and reduces compliance-related costs.
Understand how payment solutions are evolving and their potential ethical implications.
Understand how alternative financing, including crowdfunding and P2P lending, is impacting markets.
Analyze the positive and negative aspects of the introduction and expansion of FinTech.

Syllabus
Introduction: Ethics of Finance and Emerging Technologies
This module will provide a historical and broad perspective of ethical issues relating to finance and the introduction or adoption of emerging technologies.

Blockchain and Its Governance
This module will build on the Introduction to FinTech course (https://www.edx.org/course/introduction-to-fintech) to consider the most relevant and ethical ways such technology should be implemented across a number of different industries and product segments. In particular, data collection, customer privacy, and transactional issues will be covered in this module.

Cybersecurity & Crimes
FinTech can make it easier and cheaper for banks to monitor and control financial transactions, thus reducing fraud and lowering bank costs. At the same time, these tools can be used to steal money and corporate secrets, hide illegal activity (including purchases of weapons, drugs, etc.), and finance terrorists and other criminal organizations. Accordingly, this module will consider the implications of these important issues.

AI & FinTech
In this module we will consider the implications of building our own concepts of “human” morality into amoral machines, and whether human biases and prejudices can or will be built into such mechanisms, whether purposefully or unintentionally.

Institutionalization vs. Decentralization
One of the key reasons people are calling for FinTech is its decentralized nature, which democratizes finance and allows ordinary people to participate more fully and affordably in financial transactions through technologies like cryptocurrencies, non-government-issued IDs, and P2P lending. In this module we will address some large questions: should FinTech lead to a decentralized, democratized system of finance, or will existing institutions adopt FinTech strategies to cement their existing hold on the financial markets?

Big Questions Relating to the Introduction of FinTech
In this final module, we will consider some of the many outstanding questions and purposes of introducing FinTech to the world, exploring the many ways that FinTech can both help and hurt society. We will discuss financial inclusion, sustainable development, and many other positive aspects of FinTech development. Conversely, we will also consider how these same technologies and solutions could potentially be used to inhibit access to financial markets, or worse.
Welcome and Course Administration

Welcome to FinTech Ethics and Risks
Hey, listen. We have a decision to make. Humanity stands on the edge of a massive shift in technology and productivity that is going to fundamentally alter our lives. Blockchain, big data, artificial intelligence, these buzzword technologies are rapidly changing our world, just like the steam engine that started the Industrial Revolution. Over the past century, new technologies have changed how we work and even how we define work. During that time, the average number of hours worked has steadily declined in the developed world, but lifestyles have generally improved. And the technologies on the horizon look to completely alter society as we know it. So here’s the cool part. If we get the next 10 years right, humankind could be well on its way to reaching the type of Utopian existence characterised in many stories about the future. This is especially true in the area of financial technology. And while maybe not as sexy or cool as driverless cars, advancements in FinTech will make it easier to send, receive and invest money. These are at the core of business and commerce, and FinTech stands to alter these interactions completely. But before we dive headfirst into this brave, new world, it’s critical to ask a few questions. Like, why? Why is blockchain technology necessary? Is faster, cheaper, smarter always better when it comes to data? What unintended consequences will arise from introducing artificial intelligence into everyday life? You see, unlike the steam engine, the magic of these new technologies is that they can scale faster than ever and quickly engulf the entire world. And while they may have the power to unite and transform, they can also be used to bind and control. Advancements in technology over the next decade will certainly lead to massive job loss and many fear new forms of slavery, surveillance and crime. Now is the time for us to consider what we want and what we will allow. We can’t wait until these new technologies are fully developed. 
Once we push play, we can’t just rewind. If we don’t talk about it now, it will be too late. This course is a chance for us to consider these questions together. Through it we hope to explore the implications for us individually and collectively. We live in a world where distance is relative and resources are growing scarcer, where local problems now have global implications. Humanity may stand on the edge, but we stand on it together. So join us as we consider these tough questions and help shape our collective future.

Course Outline
FinTech Ethics and Risks is a six-week, six-module course. Each weekly module comprises 5-7 sections, consisting of 15-20 learning units. In each learning unit, there is a short lecture video, followed by learning activities such as Quick Check questions, Polling, Word Cloud and Discussions. In addition, a range of additional resources is provided, including research papers, news articles, industry reports, and useful links. There is a Conclusion Quiz at the end of each module.
Discussion is a very important part of your learning experience in this course. The course instructors will post questions and discussion prompts under each topic, and selectively comment on your responses. By the end of the course, active participants will also be invited to be discussion moderators and community TAs for the next course cohort.
Course Outline:
Module 1: The Ethics of Finance
This module will provide a historical and broad perspective of ethical issues relating to finance and the introduction or adoption of emerging technologies.
Module 2: Blockchain and Its Governance
This module will build on the Introduction to FinTech course to consider the most relevant and ethical ways such technology should be implemented across a number of different industries and product segments. In particular, data collection, customer privacy, and transactional issues will be covered in this module.
Module 3: Cybersecurity and Crimes
FinTech can make it easier and cheaper for banks to monitor and control financial transactions, thus reducing fraud and lowering bank costs. At the same time, these tools can be used to steal money and corporate secrets, hide illegal activity (including purchases of weapons, drugs, etc.), and finance terrorists and other criminal organizations. Accordingly, this module will consider the implications of these important issues.
Module 4: Artificial Intelligence & FinTech
In this module we will consider the implications of building our own concepts of “human” morality into amoral machines, and whether human biases and prejudices can or will be built into such mechanisms, whether purposefully or unintentionally.
Module 5: A Decentralized Future
One of the key reasons people are calling for FinTech is its decentralized nature, which democratizes finance and allows ordinary people to participate more fully and affordably in financial transactions through technologies like cryptocurrencies, non-government-issued IDs, and P2P lending. In this module we will address some large questions: should FinTech lead to a decentralized, democratized system of finance, or will existing institutions adopt FinTech strategies to cement their existing hold on the financial markets?
Module 6: Positive Impact of FinTech
In this final module, we will consider some of the many outstanding questions and purposes of introducing FinTech to the world, exploring the many ways that FinTech can both help and hurt society. We will discuss financial inclusion, sustainable development, and many other positive aspects of FinTech development. Conversely, we will also consider how these same technologies and solutions could potentially be used to inhibit access to financial markets, or worse.
Module 1: The Ethics of Finance

1.0 Course Introduction
Welcome to FinTech Ethics and Risks. We are excited to embark on this learning journey with you, and we genuinely believe that the principles we will explore together are at the heart of one of the great debates that humanity will need to address in our lifetimes. Over the last few months, as we have prepared this course, this reality has become even clearer to us. Advances in technology, especially those related to financial technologies or “FinTech”, are already starting to impact us and will eventually become so pervasive that they will be a core part of our existence. Because of that, we felt compelled to teach this course, in order to collectively consider, with you, key principles and questions about how we want to manage technological change as it intersects with our lives. In developing new technologies, the key focus is usually on whether something can be developed or created, in essence captured by the question, “Can we do it?” This important question has been an engine that has driven human progress and technological advancement. There is, however, another equally critical question that is usually not asked: “Should we do it?” This question is incredibly important because it forces us to consider the impact of new technologies at their genesis, and not when it’s too late or too difficult to mitigate negative aspects of the technology that were not initially considered. So at its core, this course is about considering the impact of new technologies, especially FinTech, before they are so mature and embedded that they cannot be managed. To kick off our journey, we will first consider the history of finance and its role in society before moving on to an interesting case study about the financial institution Wells Fargo. Then we will lay out five key principles that frame the course. These five principles are: trust, proximity, accountability, cultural lag, and privacy.
And then we will return to each of these principles repeatedly through the rest of the modules. Lastly, while the nature of ethics sometimes requires an exploration of the dark realities of life, please don’t mistake that for a lack of enthusiasm about the future on our part. If thoughtfully managed, we believe FinTech is a key to a utopian future where society is more fair, just, and inclusive. Thank you again for joining this journey. It’s important because we have a collective choice to make about that future.

1.1.1 What Is Money?
Before we begin exploring FinTech in greater detail, let’s take a few minutes to consider the history of finance, and what role it plays in society. To do this we will consider three questions: One, what is money? Two, how do we value money? And three, why do we have banks? Answering these questions will help us understand the rise of FinTech, and the moral underpinnings that make up the foundation of the industry. And for those of you in the finance space, please bear with us for a moment. This course is being taken by diverse people from all over the world, and is going to cover some pretty complicated ideas. We need everyone to have a clear foundation in some of the major principles so that we can move on to some of the more advanced concepts. Now in reality, society is at a stage where we all should take a step back and review the nature of the finance industry. So whether you’re new to finance, or a savvy industry veteran, let’s revisit these foundational principles together. A few weeks ago while I was walking my 7-year-old daughter Lola to school, she asked me a question that caught me completely off guard. She looked up at me as we were walking hand-in-hand and said: “Daddy, what is money?” I was confused by the question, and started mumbling something about bartering, and working, and that we use money to represent value. But no matter what I said, she just kept repeating “that doesn’t make any sense, that doesn’t make any sense. Money is just paper and is not worth anything.” Well, what I failed to explain to her 7-year-old mind is a key lesson of finance upon which much of society is built: that the value of money is a social construct built on trust. Let’s look at this another way: close your eyes and imagine you were just given a million dollars. And really, close your eyes – just trust me for a minute. Now, picture it. Really try to think about what you could buy for a million dollars.
But now wait a minute – I didn’t tell you the currency of the million dollars. Think about how different that consideration would be if the currency was US dollars versus Hong Kong dollars, or some other type of dollar. As you probably know, the value of money fluctuates based on the relative value of the currency. And this calculation also changes depending on the time: a currency may be more or less valuable today than it was yesterday. This has been starkly evident in the massive fluctuations of cryptocurrencies like Bitcoin over the past few years. Okay, one last consideration: close your eyes again and envision what you could buy with US$1 million. Try to picture it. You could buy a nice home, a fancy sports car, or finance a trip around the world several times over. Now, picture that pile of cash and what it would look like. Maybe even consider throwing it out over a bed and just rolling around in it for a while. Now imagine you are stuck on a deserted island. You are starving, thirsty, maybe scared. You have that same pile of money – but what can it buy you now? Are you going to be able to negotiate with the apes for some of their bananas with that money? In that context, the dollars might be more valuable as kindling to help you start a fire! Or imagine a small boat pulled up to the island with the ability to rescue you, but the price of your rescue was the entire one million dollars. Would you pay it? Okay, so what’s the point? We share these stories because before we get too far in this course, we need you to understand a couple of things. The first thing is that value is subjective. And when we say value, we are referring both to material value – the value that we ascribe to goods and services – and to the value that we place on morality and personal connections.
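The point that a “million dollars” means very different things in different currencies can be made concrete with a quick calculation. This is only an illustrative sketch: the 7.8 HKD-per-USD rate and the `to_usd` helper are assumptions for the example (the Hong Kong dollar is pegged near 7.75-7.85 per US dollar), not figures from the course materials or live market data.

```python
# Illustrative: the same nominal "one million" differs in purchasing power
# depending on its currency. The rate below is an ASSUMED, roughly
# peg-level figure, not live market data.
HKD_PER_USD = 7.8  # assumed rate: Hong Kong dollars per US dollar

def to_usd(amount_hkd: float) -> float:
    """Convert an HKD amount to USD at the assumed fixed rate."""
    return amount_hkd / HKD_PER_USD

if __name__ == "__main__":
    print(f"USD 1,000,000 is worth USD {1_000_000:,.0f}")
    print(f"HKD 1,000,000 is worth about USD {to_usd(1_000_000):,.0f}")
```

At the assumed rate, a million Hong Kong dollars buys roughly an eighth of what a million US dollars does – the “million” is the same; the value is not.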
The second thing that we need you to understand is that the very concept of money is largely a social construct, something we have invented as a medium of exchange and to which we have ascribed a specific value. As my daughter Lola noticed at 7 years old, in a vacuum money by itself isn’t really worth anything. And when stuck alone on an island, your banknotes carry little value. So if currency by itself is essentially without value, then why is it so important and so highly coveted? To understand that, we have to go back a few centuries.

1.1.2 How Do We Value Money?
So if currency by itself is essentially without value, then why is it so important and so highly coveted? Let’s do another thought experiment to explore the answer. This time imagine you live in a small European town maybe 250 years ago. You can select your occupation – maybe a blacksmith or cobbler, or, more commonly at the time, a farmer. Back then most economic activity took place in small family enterprises, everyone knew each other intimately, and a lot of transactions were based on a barter system. So if you raised chickens, you could trade your eggs for whatever else you needed. For example, if you wanted rice, you would need to find a rice farmer who had spare rice to sell – and who wanted eggs – and then agree on how much rice an egg would get you. Back then, most transactions were proximate, meaning they were directly between two people meeting in person. In that type of proximate, one-on-one scenario where you both live in the same small town, deceptive sales practices are far less likely because merchants rely on their good name. As you can imagine, it would be pretty awkward if you cheated someone, since you would have to keep running into each other in your small town. As a result, this type of personal connection to the community significantly increased the trust within the marketplace. So as you can probably understand, this type of proximate, one-on-one barter system is fairly limited. If you were a teacher, for example, providing education, then how many eggs is an hour of education worth? So over time money was introduced as a common medium of exchange, making trade easier and giving rise to service industries and other knowledge-based professions. But here’s the challenge: let’s say you start to receive currency for your eggs rather than bartering for rice or other goods. How can you ensure the value of that currency?
Imagine if for your entire life you’ve been bartering for goods, and now instead someone wants to hand you a piece of paper and say that it is the equivalent value of that particular good. As we discussed earlier, in a vacuum the currency in your wallet has little inherent value. Its value exists simply because we have decided that it does – and society has decided to use it as a medium of exchange. As my daughter Lola said, that really doesn’t make any sense! Well, actually it does, but only when based on a very broad social construct founded on trust. We trust that the money we hold in our hands today will have some meaningful value tomorrow. Even in today’s much more complicated marketplace, trust remains the basis of our monetary system. Our money today is not backed by a physical commodity like gold. It only has value because governments have declared it to be legal tender – what we often call fiat money – and people believe or trust that such status will continue. If you remove trust from the financial system, then the entire thing crumbles, as we have seen happen during financial crises around the world, like the one that is sadly occurring in Venezuela right now.

Additional Readings

1.1.3 Why Do We Have Banks?
Okay, so we’ve done some interesting thought experiments related to money and how we value money, and particularly about how our financial system is based on trust. But then what is the “financial system”? What does that even really mean? Finance and the financial system largely refer to the services related to the management of money. And, as mentioned earlier, this often requires a relationship of deep trust, or what we would today call a “fiduciary relationship.” And if our trust in money is largely a social construct, our trust in the financial industry is largely an economic and legal construct. In other words, we rely on contracts and the law to enforce our rights, rather than the intimate social relationships that were common in Europe 250 years ago. Although finance is made up of many types of institutions, banks have always been at the heart of the industry. So let’s take a minute and explore the traditional purpose of banks. Why do we even have them? When I think about a bank, the first thing that comes to mind is a physical location where people go to deposit and withdraw money. Or to put it really simply, it’s the place you stick your money to keep it safe. But this is only a part of what banks are for. During the industrial revolution, the traditional feudal system broke down and new industries started popping up all over the world. This led people to start moving away from farming jobs and into manufacturing and service roles, which for many families meant that they had discretionary income for the first time. If they didn’t want to hide it under their mattress, they needed a safe place to keep that money. And with the rise of entrepreneurship and companies during that era, many people also sought loans for starting businesses, buying homes, and other consumer necessities. As a result, the financial industry really started to thrive, and banks popped up all over the place.
These banks served four primary functions, which really haven’t changed that much even today. First, banks give people a way to save money safely. This makes sense – I’m sure you’ve seen a TV show or movie that included a bank heist where criminals broke into a vault. The vaults have huge doors, thick walls, complex security systems, and most importantly – lots and lots of cash, gold, and other valuables. And although some of this has changed, especially as much of currency and banking has become cloud-based and hosted on servers rather than in vaults, security is still the number one reason why so many people use banks to hold their money. The second traditional function of banks revolves around financing. As economies started changing, people started exploring new uses for credit and capital, such as what many people carry around in their pocket – the credit card. This is a form of financing. The expansion of consumer credit has been a key driving force in enabling many people around the world to move from low income to the middle class, and traditionally banks have been the best source of consumer credit. The third traditional function of banks is to facilitate investments. Without getting too complicated, let’s use a simple example. Let’s say one day you receive your paycheck – and after paying all your bills you have some money left over. And you decide that you want to save that money. But instead of saving that money in your bank account, you say: “hey, I want to buy a mutual fund, or I want to invest in the stock market by purchasing shares in a company that I like.” Banks are at the core of that type of investment activity, and that is an important role they play in society. The fourth and final traditional function of banks revolves around providing financial advice – often helping companies or individuals make the best use of the money they have at their disposal.
For these four reasons, banks have been trusted community partners for centuries, and a key driver of the rise of the middle class throughout the world. But are things starting to change?

Additional Readings

1.1.4 The Loss of Trust in Financial Institutions and the Rise of TechFins
As we mentioned, finance is largely built on trust. And in the past, banks served as the guarantors of trust in the financial world. But trust in financial institutions has diminished pretty significantly in many countries over the past decade. This of course is largely due to the Global Financial Crisis, which affected millions of people around the world. I remember that time vividly. Back then I was still practicing law full time in Hong Kong, one of the world’s financial centers. So most of my friends, colleagues, and clients were deeply affected by the crisis. Even so, we were only able to watch as the near collapse of the global financial system occurred. David Lee and I watched many of our friends as they were terminated from jobs with little warning. We had a daily reminder of how flippantly certain members of the global financial community pursued profits at the expense of their customers, raising concerns that government regulators were not adequately supervising the financial industry. The crisis and its aftermath highlighted how the financial system and banks failed to perform some of the chief roles they were meant to perform for our society, particularly in managing risk and allocating capital. Millions of people around the world lost their homes, their savings, essentially their futures. In the US alone, it was estimated that American households lost $20 trillion in wealth as a result of the Financial Crisis. As a result, it might not surprise you that many people began to distrust the very institutions that were meant to protect and serve them. And the age-old characterization of bankers as greedy, selfish, short-sighted bloodsuckers returned in full force. Let’s be honest: many large financial institutions have not done much since the Financial Crisis to reduce our concerns, with multiple high-profile scandals only helping to hasten the rise of FinTech innovations outside of the traditional financial sector.
Over the past ten years, due in large part to a combination of the Financial Crisis and the advent of the smartphone, a major shift has occurred, characterized by the rise of what we call the TechFins – digital platforms like Facebook, Amazon, Google, and Tencent – that provide e-commerce, peer-to-peer lending, and communications, and increasingly serve as the keepers of our digital identity. But after more than a decade of explosive growth, many of the TechFins are themselves embroiled in controversy, once again leaving customers wondering whom they can trust. Data privacy breaches and little accountability have caused many people to question their use of these large technology platforms. But the fact remains: people still need financial services. So who will step up as the trusted partners of the future? Of our future? Let’s consider that question as we dive into our first case study.

Additional Readings

Buckley, R. (2016). The Changing Nature of Banking and Why It Matters. In R. Buckley, E. Avgouleas, & D. Arner (Eds.), Reconceptualising Global Finance and its Regulation (pp. 9-27). Cambridge: Cambridge University Press. doi:10.1017/CBO9781316181553.002

1.2.1 Case Study – Wells Fargo
Banks have used the past decade since the financial crisis to rehabilitate their image, some more successfully than others. But one bank has recently gone above and beyond in reigniting the general public’s disdain towards financial institutions. If banks are built on a foundation of consumer trust, Wells Fargo has systematically dismantled that trust, leading to an uncertain future for the institution. Wells Fargo has a really interesting history. It was established in 1852 in San Francisco during the gold rush, and as a result has long been an integral part of the American financial landscape. When gold was discovered in California, Wells and Fargo – two entrepreneurs – decided to provide services relating to the transport and safekeeping of gold dust, gold coins, salaries, and other critical resources all across the US Western frontier. You may have seen it before: the stagecoach is the logo of Wells Fargo bank. Before the advent of railroads, stagecoaches were considered the safest and most reliable form of transportation for people and valuables across the dangerous deserts of the Southwest United States. This was the age of the American cowboy, and those stagecoaches were the targets of some of the most notorious bandits of the time. You have probably seen movies with this type of scene – a stagecoach driver and a guard sitting up front, carrying sawed-off shotguns and revolvers, often having to fight their way past bandits in the rugged terrain. Anyway, this is important because once again, the crux of the entire business model was based on trust. Trust that the Wells Fargo coach drivers wouldn’t steal the gold dust and bars they were carrying. Trust that the stagecoaches and the roads built would provide reliable transit to ensure payment of railroad employees.
Wells Fargo was so trusted by the railroad tycoons that it quickly established the largest fleet of stagecoaches in the world, helping to build one of the oldest and largest banks in the United States, eventually employing more than 200,000 people globally. In September 2016, news emerged that employees at Wells Fargo, the most valuable bank in the world at the time, had created millions of fake bank and credit accounts that customers had never authorized. Due to a high-pressure sales culture and an incentive-compensation program for employees to create new accounts, Wells Fargo employees had engaged in an array of immoral practices, such as fraudulently opening accounts, issuing ATM cards and assigning PINs, faking signatures, and using false email addresses. Customers had subsequently been hit with late fees, overdraft charges, annual fees, and other costs – all of which could affect their credit scores. When customers noticed the charges, employees would apologize and lie, saying there had just been an administrative mistake. This dishonest program was based on the internal goal of selling at least eight financial products to each customer, or what Wells Fargo called the “Gr-eight initiative.” These products included credit cards, savings accounts, investment accounts and more. Why eight, you may ask? Because eight rhymed with great! No joke, that’s what they decided: the CEO said “because eight rhymes with great,” and so they arbitrarily decided that each customer should have eight accounts with the bank. Selling different accounts to bank clients is commonly known as cross-selling. Basically, if you go to a bank and open a savings account, they might ask you to open a checking account, or buy an insurance plan. That is cross-selling, and Wells Fargo wanted the average customer to have eight such accounts. Why? Well, in part because it allowed the bank to make more money in fees.
But to be honest, the fees were minimal, and Wells Fargo didn’t really make much money off of them. So why would they do it? Why did the bank put so much pressure on its staff to cross-sell eight accounts that managers across the bank started creating fake ones? The reason is that Wall Street analysts used data like “new accounts opened” as a key metric when evaluating a bank’s share price. That means the more customer accounts Wells Fargo could show, the higher its stock price went, even if Wells Fargo wasn’t actually making any additional money. And as analysts saw all the new customer accounts, the share price for Wells Fargo doubled between 2012 and 2015. And who makes money when the share price goes up? Shareholders do, but in particular the executives and directors of the company, who are compensated primarily in stock options. So, in other words, even though Wells Fargo wasn’t making more money, or serving its customers better, the value of the shares doubled, making a lot of money for the bank’s executives – the very people who created this horrible practice in the first place. The high-pressure sales culture created by Wells Fargo’s executives, where you could be fired for not hitting cross-selling goals, created a toxic environment that pushed employees to fear for their jobs and make bad ethical choices, all while management turned a blind eye to the practice. The program finally became public years after Wells Fargo’s management knew about the problem. When asked why he didn’t notify government officials as soon as he learned about the problem, then-CEO John Stumpf said that the amount of money Wells Fargo made from the program was immaterial relative to the bank’s size – and thus not important. Of course, this incensed the public and lawmakers alike, and they demanded action. So what did Wells Fargo do? Well, they didn’t replace any of their senior management. 
Instead, they terminated nearly 5,300 mid-level employees, stating it was their fault for making up all the fake accounts. Not a single top-level executive was fired at that time. Once again, this did not seem sufficient to the public and US lawmakers. US senators grilled Wells Fargo’s top management, and the media carried story after story detailing the bank’s actions – or perhaps more accurately, its inaction. After mounting pressure, then-CEO John Stumpf stepped down, as did Carrie Tolstedt, the head of the community banking division at Wells Fargo. But don’t feel too bad for either of them. When Ms. Tolstedt left Wells Fargo, for example, she received about US$125 million in equity compensation as a retirement package. All in all, Wells Fargo had engineered what one analyst described as a “virtual fee-generating machine, through which its customers were harmed, its employees were blamed, and Wells Fargo [and its executives] reaped the profits.” In light of the scandal, Wells Fargo and its new CEO, Tim Sloan – the bank’s former COO – emphasized that they would initiate refunds “as part of [their] ongoing efforts to rebuild trust.” But Wells Fargo’s problems didn’t end there. Its unethical internal culture had permeated several of its businesses, leading to a string of scandals and investigations. For example: In July 2017, Wells Fargo admitted to forcing up to 570,000 borrowers into unneeded auto insurance. Reports also emerged that 110,000 customers had been incorrectly charged “mortgage rate lock extension fees” between September 2013 and February 2017. And last year news also emerged that a computer glitch at Wells Fargo caused hundreds of people to have their homes foreclosed on between 2010 and 2015. As a consequence of these numerous scandals, the Federal Reserve announced on February 2, 2018 that Wells Fargo would not be allowed to grow its assets until it cleaned up its act – an unprecedented punishment. 
In May 2018, Wells Fargo launched a marketing campaign to emphasize the company’s commitment to re-establishing trust with its stakeholders. The commercial opens with the Old West origins of the bank, depicting its transition from horse riding to the iconic stagecoach, the steamboat, the train, its branches, its ATMs, and now its mobile systems – portraying its whole technological journey. The video then makes references to the scandals, illustrating how it is now a “new day at Wells Fargo.” That new day, and the attempt to re-establish trust, may have been in vain. Just a few months later, in August 2018, the US Justice Department announced that Wells Fargo had agreed to pay a $2.1 billion fine for issuing mortgage loans it knew contained incorrect income information. The government said the loans contributed to the 2008 financial crisis that crippled the global economy. If trust is a key component of the financial system and banks, what does the experience of Wells Fargo tell us about the financial system today? Do you feel that the Wells Fargo example is an outlier, and that most of the financial industry today can be trusted? Or are you skeptical about the ethics of the broader industry as a whole? Additional Readings 1.2.2 Case Study – Wells Fargo: Breach of Trust
Okay, this is a crazy case that a lot of people in the financial industry were really, really concerned about. So why is this case so important? I mean, there seems to be a lot of financial crime out there, people not doing great things all the time – what made this one particularly special? Yeah, it’s a good question, because the actual money that Wells Fargo made from this really wasn’t a lot, so in terms of financial crime it wasn’t that significant – and yet a lot of people were really upset about it. Some financial analysts even said that this was the worst financial crime ever. And I think the main reason is because, you know, for you out there, for me, I choose a bank solely because I need to know that I can trust them. Right. And in this particular instance, they completely betrayed that trust, seemingly for completely selfish and greedy reasons. So, when you say selfish and greedy reasons, what do you mean by that? Well, again, there really was no benefit to the customer here. When you open a bank account and put some money there, you’re not anticipating that they are going to do all these shady things behind your back – opening up accounts, or signing you up for insurance, that you know nothing about. In this particular instance, I feel it was just complete dishonesty and betrayal of trust, with no benefit to the consumers whatsoever. They didn’t, for example, do any research and conclude that customers are better off with eight accounts; they simply said that eight rhymes with great, and therefore we’re going to do this. Okay, so then who did benefit from this kind of activity? The senior staff, the CEO, various high-level people within the company, specifically those who held stock options, for example. 
Because, again, it’s very unique: the bank didn’t make very much money off the unethical behavior directly. The reason the executives made money is that the share price doubled within a short period of time, so they were able to sell off their shares and personally benefit significantly, while the bank itself didn’t actually receive much remuneration. That’s interesting, so you’re saying that, from an economic perspective, the bank did not make any money from this? But somehow these extra accounts they created increased the share price, because Wall Street analysts saw them as a metric showing that the bank was growing. [Yeah, exactly.] And so, in terms of market value, it seems that the bank was growing, but in terms of actual economic value, no real value was added by this behavior. So, basically the explanation is like this: the bank itself, when it does transactions, makes money out of it, just like you’d make money if you sell hamburgers or whatever. And from this kind of unethical, even illegal, behavior, the bank only made, it is thought, between $1.5 million and maybe $2.5 million from these transactions. But here’s the thing: their share price more than doubled, which means that the individuals who owned those shares – including the CEO and various senior officials who were pushing this behavior – made hundreds of millions of dollars collectively, and they walked away with almost all of it. Now, there were some clawbacks, some cases where they had to give up part of that money, but again, although in disgrace, they walked away with hundreds of millions of dollars. And roughly how much were the fines and penalties that Wells Fargo had to pay because of this behavior? Yeah, again, this is the terrible thing. 
Again, if you’re the customer of a bank, and you want the bank to be led by people with integrity because you want to ensure that your investment is safe, here’s the rub: these executives individually made hundreds of millions of dollars, and then, when they left the bank in disgrace, the bank ended up paying hundreds of millions of dollars in various fines and legal fees – potentially over a billion dollars more recently. And that doesn’t even include the reputational loss: municipal and state governments completely removed their business from Wells Fargo, which made it much harder for the bank to continue growing – or, not impossible, but it’s certainly hurting their bottom line. And it was so bad that the federal government in the US effectively stopped their growth, saying: you’ve got to clean this up, because you’re not running this in a reputable way. So it seems like there is a tragic irony here: the people who at least allowed that behavior to occur on their watch were able to benefit from it and walk away, while the fines the bank has to pay are really being borne by the current shareholders and other stakeholders, such as customers and employees, who have to deal with the fallout of all this. Yeah, and that includes all of you, by the way. So, think about it: if you’re going to use a bank, if you’re going to use them for services – how would you feel if you knew that they betrayed your trust in that way? How do you move on from that? Additional Readings 1.3.1 Key Ethics Principle – Trust
After learning about the Wells Fargo case, what were some of the underlying thoughts that you had about it? Did the actions of the bank’s leaders surprise you? And would you trust Wells Fargo as your bank after learning what they did? You might be surprised to learn that some financial analysts said this was the worst financial scandal of all time, primarily because Wells Fargo acted so completely contrary to the interests of its customers. What do you think? When studying ethics, it is often helpful to use examples like Wells Fargo and other cases to consider possible outcomes and actions in real-world terms. Throughout the course we will share cases like this, in part to help you learn specific principles, but also to help you create value judgments for your own life – to help you create a moral code, so to speak. By so doing, we hope that you will come to a clearer definition of personal ethics in your own life and career. And while there are many different ethical concepts that we could discuss throughout the course, we are primarily going to focus on five key ethics principles. Those five key ethics principles are: trust, proximity, accountability, cultural lag, and privacy. Some of these concepts, like trust and accountability, will be really familiar and easy to understand. But some of the others, especially proximity and cultural lag, might take some additional study. And please keep in mind that even though the basic premise of some concepts might be familiar and easy to understand, the challenge is to extrapolate and consider how those concepts are going to affect us as technologies change in the future. For example, while we all understand the basic meaning of the term “privacy,” how do you think that concept will adapt and change with the advent of AI and facial recognition software? In this class we will ask you to look into the future a bit and try to predict what likely but unanticipated consequences will result, whether good or bad. 
Okay, so let’s get started with the first key ethics principle: trust. We have already mentioned trust a lot in this module, and it is probably the simplest concept to understand. For example, it doesn’t take a finance or law degree to understand that the deceptive practices of Wells Fargo and its staff were incredibly unethical, and likely criminal. So we are not going to dwell too much on the concept of trust now. But it is worth repeating yet again that the entire financial system is built on trust, and therefore the bulk of criminal financial laws punish any breach of trust, or of what we professionally call “fiduciary” obligations. And as a side note, for those of you who are familiar with the term “fiduciary,” it might interest you to know that the Latin root of the word literally means “one who holds something in trust.” Whether it was 250 years ago in a small European village where everyone knew each other, or in the much more complicated global marketplace that we have today, we have to understand that without a certain level of trust, the entire economic system comes crumbling down. Both traditional financial players and new FinTech innovators must keep this in mind, and ensure that their products and services continue to enhance trust. In fact, because financial institutions play such an important role in society, and since most people know so little about complex financial products, most countries actually impose disclosure requirements, meaning that banks have to be truthful and transparent with their customers, making sure they understand the nature of what they are buying or investing in. If banks are not forthright about material information, they can face significant penalties, including fines and possibly jail time. In other words, financial institutions have a higher level of trust placed on them by society, and therefore face higher penalties if they breach that trust. 
As a result, one of the major considerations relating to FinTech revolves around the need to ensure that all FinTech applications and innovations enhance social and consumer trust, rather than diminish it. It would be unethical, for example, for unsafe or unclear financial products to be introduced into the market via a new FinTech app. Unfortunately, some early iterations of FinTech have only caused the public to question the ethical use of these technologies. For example, the use of cryptocurrency to facilitate crimes has alarmed many people. We need to address these concerns right from the beginning and ensure that tech innovators and finance professionals consider not only the bottom line, but also the importance of maintaining balance and trust in society. 1.3.2 Key Ethics Principle – Proximity
The second core ethics principle that we will be discussing throughout the course concerns the concept of proximity. In psychology, proximity is a key variable in explaining behavior in many circumstances. Proximity denotes how physically or emotionally close we are to someone or something, and differences in proximity can lead to very different outcomes. One story that demonstrates the impact of proximity is the classic trolley problem. You may recall a teacher explaining it to you when you were younger. If this doesn’t ring a bell, don’t worry, we’ll do a quick recap. The typical version of the trolley problem compares two scenarios in which a runaway trolley is about to hit a group of five people. In the first scenario, you have the choice to divert the trolley by pulling a lever, which would change the trolley’s direction and kill one person instead of the group of five. In the second scenario, instead of a lever, you are required to physically push a person in front of the trolley to stop it – thus saving the group of five, but killing the person you pushed. Both actions lead to a similar outcome, and yet the way that our brains process the two situations is completely different. The trolley problem has been reviewed and studied many times, and in each case, nearly everyone opts to divert the trolley using the lever, and nearly all object to pushing a person into its path. This dichotomy highlights the importance of proximity in people’s decision-making. If an action is proximate, physically or emotionally, then we often rely on the “moral” center of our brain to consider the dilemma. That is represented by the fact that almost everyone chooses not to push the man onto the tracks directly. Conversely, if an action is non-proximate in nature, meaning the action and its outcome are separated even slightly, then we often rely on the “logic,” or cost-benefit, center of our brain to consider the dilemma. 
That is represented by the fact that nearly everyone opts to pull the lever, even though the action leads to nearly the same outcome as pushing the man. Now this is very important, because our world is increasingly distant and non-proximate in nature, resulting in our leaders increasingly using amoral, cost-benefit analysis when making decisions that can affect broad sectors of society. Let’s recall the Wells Fargo example we just discussed. If you compare Wells Fargo, a large, international bank, to a bank in a small town, the role of proximity is pretty clear. Psychologically speaking, it’s generally much harder to cheat people we are proximate to – people we interact with on a daily basis – than a customer who is just a number, one person in a large mass. Accordingly, the concept of proximity applies to FinTech as well. One great outcome of FinTech is that it will provide financial access to a greater number of people, including those who are unbanked or underbanked. At the same time, though, this technology will probably require less human interaction, meaning less proximity as well. So does that mean that as proximity declines, people will lean towards cheating each other more? Who knows, but what is clear is that we want new innovations to bring us closer together, not drive us further apart. Additional Readings Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI Investigation of Emotional Engagement in Moral Judgment. Science, 293(5537), 2105–2108. Retrieved from https://science.sciencemag.org/content/293/5537/2105 1.3.3 Key Ethics Principle – Accountability
The third key ethics principle that we will discuss throughout the course is accountability. Accountability is really a subset of governance and regulation, and is essentially a question about fairness and who is responsible when things go wrong. Many of the governance structures that we rely on in society try to make it clear who is accountable when a problem arises. But as you will see throughout the course, as the world gets less and less proximate, it is simultaneously getting harder to determine who should be held accountable for certain injuries. And FinTech innovations may be making all of this even harder. Consider the Wells Fargo case we discussed earlier: were the people responsible for violating customer trust actually held accountable? As mentioned, the bank’s initial reaction was to terminate 5,300 mid-level employees for their involvement in the program. But what about the leaders who created and pushed the program? It seems pretty clear that in that case there was an accountability gap. This question of accountability is also relevant for technology. Consider a social media platform that you frequently use, say Facebook, Twitter, YouTube, or their equivalents in your country. If inaccurate or even harmful content is posted there, who is accountable for it? Surely, we would say the individual who created and posted it. But should the technology platform hosting the content also be responsible? This is an important question, and in the wake of fake news and some truly tragic incidents, there is understandably a lot of debate about who should be accountable. In some countries, like Singapore, we may have an initial answer. Singapore is planning to implement a new law that will require online media outlets to issue warnings, possibly make corrections, and in some situations will even force companies to take down content that is false. Prior to this, such platforms could act at their own discretion to close accounts or limit false information. 
Perhaps not anymore. The United Kingdom may eventually go even further in its efforts to regulate the internet through a recently proposed law that would make technology companies more legally liable for the content they host, through fines, penalties, and direct litigation. Areas that the possible new law would cover include content that supports violence or terrorism, promotes suicide, spreads false information, and even cyberbullying. So when considering accountability for technology companies, including FinTech firms, it seems that society may no longer be satisfied with attempts at self-regulation, which then raises the broad question of how large technology companies should be regulated. Additionally, should TechFins be regulated and treated differently than large financial firms? What should be the standard, and should that standard be global? 1.3.4.1 Key Ethics Principle – Cultural Lag
The fourth key ethics principle that we will discuss throughout the course is cultural lag, which is the idea that it takes time for culture to catch up with technological innovations, and that social problems and conflicts are caused by this lag. Until now we have talked mostly about finance, but FinTech is not only about finance; that’s the Fin, but there’s also the Tech, the technology. And cultural lag considers the best way to ethically introduce new technologies into the marketplace. Technological innovations are often characterized by one word: disruption. If you pay attention to Silicon Valley, it seems like someone is talking about disruption every few minutes. “We’re going to disrupt this industry.” Or “This innovation is built for disruption.” And while not everything out of Silicon Valley is really “disruptive,” many amazing disruptions and innovations have propelled humankind forward. And the pace of disruption seems to be increasing: humankind has progressed more technologically in the past 200 years than in the previous 20,000 years combined. But is disruption always good? And even when the overall impact is positive, are there ethical issues that should be considered when introducing innovative disruptions? The answer obviously is yes, but we seldom talk or think about these ethical questions until after the technology has been introduced, which is often too late. As mentioned in the Introduction to FinTech course, as human beings we tend to overestimate the effect of technology in the short run and underestimate the effect in the long run. This seems obvious, right? For example, just think about the far-reaching impacts that smartphones have had since their introduction. Can you believe that smartphones were first introduced only around 10 years ago? I guess for some of you younger students, that might not seem like a long time ago. But for a lot of us, that seems like only yesterday. 
Either way, the point is that it has only been 10 years – but think about how much of an impact smartphones have had! Pretty much everyone has one, including a large swath of the developing world. And many of the most amazing FinTech innovations are only possible because of the smartphones that all of us are carrying around today. But here’s the thing: smartphones became popular so quickly that we, as a society, didn’t really have time to understand the implications of the technology for our broader culture. And every time we started to adapt and adjust to the technology, tech innovators would introduce some new feature to stay ahead of our adjustment. These are all examples of cultural lag, and they show that technology is able to change more quickly than society can culturally adapt to such innovations. And there’s one important aspect of cultural lag theory that we need to understand: sociologists and economists believe that many of society’s most challenging problems are often caused by cultural lag. Again, think about smartphones. Experts in many disciplines are now emphasizing that smartphones are actually creating or reinforcing serious social problems. We have all heard reports emphasizing that we spend too much time looking at our smartphones, focusing on social media to the exclusion of our actual social circle. After a decade of not really understanding the implications of these habits, people are now working to reduce their screen time, and many technology firms like Apple and Google have introduced products to help track and even lessen screen time, encouraging users to spend less time on their phones. There are many more serious examples highlighting the gap between changes in technology, which occur very quickly, and subsequent adaptations in our culture, which happen very slowly. And the smartphone example is only a very simple one. 
The reality is that some of the biggest problems society faces – things so big that we sometimes have trouble seeing or understanding them – are often tied to technological disruption and the cultural lag that stems from it. And while these massive innovations are rightfully celebrated for their positive impact, it’s worth considering some correlated points. For instance, what happens to all the people who work in industries that are made obsolete by new technologies? Certainly a lot of people have benefited from technological innovations, but not everyone has. Or at least, people don’t benefit equally. Is it morally necessary for new technologies to benefit all of society? And even if that is possible, should it be an overall goal? Should that be a normative aspiration of new technological innovations? Let’s consider another example: drones. Do you have one? Or do you know someone who does? They are now pretty popular, and became so popular so quickly over the last few years that governments were caught off guard without regulations specifically covering private drone use. And there are some scary aspects of drone use that people may not have considered previously. For example, people have weaponized drones, with one drone even being used in an attempted assassination of a state leader. And while some companies are using drones in Africa to deliver blood for transfusions, there are also people using drones to drop contraband into jails and prisons, or to smuggle drugs across borders. When considering cultural lag, laws are some of the slowest-changing aspects of culture. It can easily take years for even simple laws to be enacted. As a result, when drone technology rapidly advanced, making drones affordable for almost anyone, governments raced to catch up, creating regulations to help balance public safety with personal recreation. 
As is probably clear, it’s hard to hold someone accountable for improper drone use if there is no law defining proper drone use. Thus, the cultural lag between the rapid advancement of drone technology and the much slower development of drone-related laws has created some serious concerns, including disruptions at airports and worries about privacy and the use of drone cameras around personal residences, military installations, and other sensitive locations. So when new technologies are introduced, and these gaps or lags are created, who should be responsible for the negative consequences? The innovators and inventors? The government? The users? Governments around the world have been grappling with questions like these for a long time, and some disruptive FinTech innovations are going to pose very significant challenges for regulators – and some already do. Additional Readings Ogburn, W. F. (1957). Cultural Lag as Theory. Sociology & Social Research, 41(3), 167–174. Marshall, K. P. (1999). Has Technology Introduced New Ethical Problems? Journal of Business Ethics, 19(1), 81–90. Retrieved from https://www.jstor.org/stable/25074076?seq=1#metadata_info_tab_contents Brinkman, R. L., & Brinkman, J. E. (1997). Cultural Lag: Conception and Theory. International Journal of Social Economics, 24(6), 609–627. Retrieved from https://www.emeraldinsight.com/doi/abs/10.1108/03068299710179026 1.3.4.2 Productivity Shifts and Technological Revolutions
Okay, if you are watching this course, chances are that you work in some type of service industry like finance, law, or accounting. If so, what is the difference between your chosen career and, let’s say, a farmer or some other type of blue-collar worker? A lot of us choose our careers based on security – industries that we think are safe – but here’s the reality: you are a lot more like a farmer than you may realize. By now you understand cultural lag and both the challenges and the moral implications that come from introducing disruptive technologies. From a FinTech perspective, there are some exciting disruptions right around the corner. And while these innovations will make many aspects of life much easier, there is one major challenge that we feel we need to address: the social ramifications of unemployment and job loss. Okay, to get this point across, we need to go back in time again. Early communities of humans congregated in villages for specific reasons. Protection and socialization were among those reasons, but there was one overarching activity that held early societies together: food. Early human communities revolved around agriculture, and many of the most important early innovations concerned the growing, harvesting, and storage of food. During the Bronze and Iron Ages, stone and wooden tools were replaced by more efficient metal tools, but the main processes of agriculture remained largely unchanged for thousands of years. That changed quickly during the Industrial Revolution, when, in many parts of the world, horse-drawn and even mechanized harvesting equipment was introduced, leading to a vast increase in productivity. This not only sped up the time in which crops could be planted and harvested, but also significantly increased crop yields. 
During that time, the number of people working in agriculture dropped, but the amount of land that could productively be used to grow crops grew substantially. This led to fewer but larger farms. To put it simply, fewer people were needed to farm, yet more food was grown. In fact, some historians contend that these improvements in agriculture “permitted” the Industrial Revolution, because the increase in food production and the decreased need for farm labor meant that more people could work in urban industries, providing labor for factories, large urban utility projects, and really all the innovation that led to the rise of the 20th century. But a few obvious problems stemmed from this. First, these advances didn’t occur everywhere. For example, many of the countries that are still developing today did not participate equally in these advancements, for a variety of reasons, and as a result their economic progress was delayed. And even in developed countries where these advancements were adopted broadly, the benefits were not distributed equally. But there was another, more serious issue. While the machines and tools that were introduced improved productivity, they also made many jobs redundant, leaving millions out of work and needing to transition to an entirely new industry. In the United States alone, agricultural jobs went from 40% of the workforce in 1900 to only 2% in 2000. That’s only 100 years! And while that may seem like a really long time, in terms of human history it is incredibly short. So where did all the old farmers go? Well, the lucky ones were able to find even better jobs in manufacturing, or in services related to the agriculture industry, like logistics, storage, or marketing. The point is that they had to reinvent themselves and rethink the way they defined work. For many people, this was a great opportunity, but of course others got left behind in the process. Now consider today. 
We now have unmanned drones that can plant seeds, spray and monitor the health of crops, and even harvest them. And artificial intelligence and machine learning are being integrated to help farmers make better decisions and monitor growth in real time. As a result of all this, less and less human labor is needed to run mega farms. For most developed economies, these changes started more than a century ago. But what happens when these modern technologies are integrated into developing countries today? Well, let's look at an example. From 1990 to 2017 – just 27 years – it is estimated that the share of China's working population employed in agriculture went from about 55% to about 17%. That's a difference of several hundred million jobs. And while China has done a good job of expanding its economy and transitioning those workers to the manufacturing sector, you can see how difficult it can be when productivity shifts make workers obsolete. In fact, this is happening in the manufacturing sector as well. Throughout the US and EU, manufacturing jobs have been slashed through a combination of innovation and automation. Many of the workers who lost their employment have yet to be fully reintegrated into the workforce, leading to significant political pressure and social anxiety. One question that bears asking is: who is responsible for ensuring that these new innovative technologies are integrated in such a way that social harm is minimized? Is that the role of the innovator? The government? Or someone else? Okay, so why does any of this matter? And what does it have to do with FinTech? Well, if predictions can be believed, we are about to enter the Fourth Industrial Revolution, which could bring the most significant disruption and productivity shifts humankind has ever seen. Artificial intelligence, blockchain, and other new technologies will completely alter not only how we work, but our entire perception of what work is.
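The scale of that shift can be sketched with some rough arithmetic. The workforce figures below are assumptions for illustration only – the lecture gives the percentage shares, not absolute workforce sizes:

```python
# Rough illustration of China's agricultural-employment shift, 1990-2017.
# Assumed workforce sizes (NOT from the lecture): ~650M workers in 1990,
# ~780M in 2017. Only the 55% and 17% shares come from the lecture.
share_1990, share_2017 = 0.55, 0.17
workforce_1990, workforce_2017 = 650_000_000, 780_000_000

ag_1990 = share_1990 * workforce_1990   # roughly 358 million agricultural workers
ag_2017 = share_2017 * workforce_2017   # roughly 133 million agricultural workers
shift = ag_1990 - ag_2017               # the drop in agricultural employment

print(f"Roughly {shift / 1e6:.0f} million fewer agricultural jobs")
```

Even under these loose assumptions, the drop lands in the low hundreds of millions, in line with the scale quoted above.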
Let's use two concrete examples: cashiers – the people you pay when you leave the store – and drivers. These are the two most common jobs in the United States, and that's the case in many other developed countries as well. Millions of those jobs are likely to be eliminated within the next 10 years as automation, driverless vehicles, and other FinTech innovations make them obsolete. In fact, it has been estimated that 38% of current US jobs are at high risk of being made redundant by robots and automation in the next 15 years – that represents about 60 million jobs, or one-fifth of the entire population of the United States. And although some new jobs will be invented that many of these workers will be able to take up, unlike in previous productivity shifts, these newer innovations are largely replacing human workers completely, making it difficult for the unemployed to simply shift into new work. For example, for a farmer to go work in a factory, new skills are usually not required. But for a cashier or truck driver to become a computer programmer or robotics engineer, an entirely new skill set requiring years of schooling and training would be needed. So now let me turn the question back on you: what are you doing in your profession and your career to ensure that you don't become redundant, and that you can stay ahead of these new emerging technologies?
Additional Readings
1.3.5 Key Ethics Principle – Privacy
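The arithmetic behind that estimate can be sanity-checked with a quick sketch, using assumed round figures for the US labour force (about 160 million) and population (about 330 million) – neither number appears in the lecture itself:

```python
# Sanity check of the "38% of US jobs at risk" estimate from the lecture.
# Assumed round figures (NOT from the lecture): labour force ~160M workers,
# population ~330M people.
labour_force = 160_000_000
population = 330_000_000
at_risk_share = 0.38

at_risk_jobs = at_risk_share * labour_force   # about 61 million jobs
pop_share = at_risk_jobs / population         # about 0.18, i.e. roughly 1/5

print(f"{at_risk_jobs / 1e6:.0f} million jobs, about {pop_share:.0%} of the population")
```

Under these assumptions the estimate does work out to about 60 million jobs and roughly a fifth of the population, as stated above.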
The fifth and final key ethics principle that we will discuss throughout the course is privacy. The debates raging around the world over the concept of privacy are the amalgamation of all the concepts we will cover in this course, including trust, proximity, accountability, and cultural lag. Privacy is one of the key issues of our time, and is something we need to start thinking much more deeply about. On top of all the other problems already discussed, Wells Fargo also experienced privacy and data breaches. For example, in 2017 Wells Fargo accidentally sent out 1.4 gigabytes of files containing personal information of about 50,000 of its wealthiest clients, including their social security numbers and personal financial data. Luckily, that data breach was fairly limited in its reach, but what if the data had been shared on the web for all to see? Who specifically should be held accountable for such a breach? That's actually a surprisingly hard question to answer. Questions relating to the right to privacy are not new. But with the advent of smartphones, facial recognition software, machine learning, and other FinTech innovations, our right to privacy in the traditional sense is diminishing rapidly. For example, people are increasingly worried about the possibility of being tracked through their smartphone hardware. And many common apps have been breached, or have even actively misused our private personal information. Facebook, for example, has been embattled over the past couple of years over concerns relating to privacy. As one of the most actively used social media platforms in the world, Facebook has been accused of allowing private customer information to be used for several unwelcome activities. It has even been accused of allowing its platform to covertly influence political elections. Other new technologies, such as voice recognition products and wearable devices, have people worried about who is listening to, and possibly recording, their private conversations.
Just think about it: on a daily basis, most of us click "I Accept" on so many websites without actually reading the terms and conditions that we have become desensitized to the fact that these are real legal agreements. If we as a society are going to take privacy seriously, we need to consider the moral and legal implications in a practical context – and ensure that we are clear on what rights we are giving away. Maintaining a balance between privacy and profitability in the commercial sector, or security in the public sphere, is an increasingly important challenge. But has the age of privacy in the traditional sense already ended? Have we already given up so much data via social media and smartphones that there is no turning back? And should the race to create sentient AI, which requires massive amounts of data, take precedence over personal privacy? These questions, and many more, will be discussed throughout this course, and we look forward to hearing your thoughts on how best to navigate these tricky privacy waters.
Additional Readings
1.4 Module 1 Conclusion
Throughout this module we have considered some of the underlying reasons we have money and financial institutions in the first place, helping us to understand the ethical foundations of FinTech innovations. The reality is that money is a societal construct based on trust, and the value ascribed to money is somewhat subjective. As a result, for centuries societies have relied on a shared definition of monetary value, as well as trust in banks, to ensure our money and economies are stable and secure. Unfortunately, throughout history, including the years since the financial crisis, some financial institutions have forgotten their important role in society and have breached that foundation of trust. This has led many to embrace non-traditional FinTech innovations as a way to democratize finance, and potentially move away from traditional financial industry players. Both finance and FinTech companies need to keep this in mind, and ensure that their innovations provide the highest level of societal trust possible. By walking through the Wells Fargo case, we have introduced each of the key ethical principles that will be highlighted throughout this course. Once again, those principles are: trust, proximity, accountability, cultural lag, and privacy. Keep them in mind as we proceed through the course. Next, in Module 2, we will introduce a technology that most of you have heard of before: blockchain. While many are excited about the many efficient and cost-saving uses of blockchain, others have highlighted its use in facilitating illegal activities. Let's consider both of these together, to hopefully ensure that the use of blockchain will be ethical and will help lead us toward a more utopian society.
Module 1 Roundup
– Hi everybody. Welcome to the weekly wrap-up, where we discuss various course-related matters. First, we want to give a huge thank you to everyone who's participated in the course so far. We've had a really great response and we're so happy for all the amazing comments on the discussion forums. – Yeah, the response has been great, and currently there are 4,777 of you enrolled in the course from 154 countries or regions around the world, which is great. We're so thrilled to see many parts of the world represented, and are especially grateful for those of you who are including specific examples from your home countries and cultures in the discussion forums. Finance, fintech, and tech disruptions are affecting various parts of the world in different ways, so it's great to hear local perspectives on everything we're doing. – Yeah. In response to the poll questions, it was great to see that most of you believe an hour of education is worth more than 100 eggs. So I guess that means that you value education, which bodes well for our future as educators. But we also think it's cool that most of you wanted the whole chicken. It shows you're savvy negotiators. – Now, we also thought it was interesting that the majority of you out there still trust banks over fintech startups and techfins, but 28% of you don't trust any of them. That's a really important statistic that we hope you explore personally and in the discussion forums as the course continues. You know, it's incredibly important that financial service firms do a better job of establishing trust in the marketplace. And if you all out there are representative of the market, then it's clear many people around the world do not trust financial firms either. – And speaking of trusting banks, many of you commented that you trust banks more because they are better regulated and your deposits are insured. Now, while we agree and think those are really reasonable ideas, it does make us wonder.
Isn't one of the major focuses of fintech innovation to avoid regulation and government intervention? And isn't regulation one of the things that makes banks inefficient and hard to deal with in the first place? We look forward to hearing your feedback as the course moves on, to see how fintech and techfin firms can continue to build trust while maintaining their efficiencies. – Now, when thinking about proximity, 77% of you said that you would pull the lever to divert the trolley from hitting the five people, thereby killing the one person. But on the other hand, 61% of you said that you would not push the man off the bridge to save the five people. These results largely mirror what academics have found when researching these questions in the past. This is a great example of how proximity can alter our decision-making. – And you've provided a lot of really great examples of proximity affecting our behaviour in real life. Some of you mentioned how we'd be willing to donate our organs to save a loved one, but maybe not do the same for strangers. Or how we often donate to our churches or charities in our local communities, but don't do the same for problems that are affecting people far away. And one of you mentioned the difference between stealing from a bank and cybercrime. We're gonna talk about issues of bank theft in the next couple of modules, so remember that one for later. – Now, concerning the disruption that will come from tech innovation: a full 60% of you said that you were either very or somewhat concerned that the disruption and the resulting job loss would create broad social problems. And interestingly, when asked which industries you thought were most at risk of disruption, 41% said accounting and auditing, while 34% said finance – by far the two most common answers. Is that because most of you are from the accounting or finance industries, and therefore you see the risk of automation in those industries?
Or is this just an overall observation? – Yeah, I was wondering about that. Either way, we hope that you're all thinking more deeply about your life and your career and how you can future-proof yourself – meaning planning ahead to ensure that you're not made redundant and are always employed doing something that you love and that's meaningful for society. – Now, we also appreciated all the great comments about cultural lag. We know that this is probably a new concept for many of you, but basically, cultural lag means that technology adapts and changes faster than culture, especially in areas like law and religion. And as a result, sometimes technology changes so fast that it takes a while for us to realise the negative impacts that technology may be having on society. – And you provided really good examples of cultural lag in everyday life, including the impact of social media on teenagers, especially cyberbullying, which is kind of a sad but really relevant thing; the spreading of fake news and false stories via social media, made possible by the omnipresence of smartphone technology; and the environmental costs of mining Bitcoin, which is one of the topics we're gonna discuss in the next module. – Finally, we also learned that students out there all over the world are smart and disciplined with their money. When asked what you would do with a million dollars, you all answered really responsibly. We were expecting some really crazy answers, like trips to Las Vegas or buying an island or something, but most of you talked about paying off debt, investing for the future, particularly kids' schooling, and even helping the community. Now, in our next module, module two, we're gonna look at blockchain and its governance. We won't cover blockchain in too much technical detail, but rather look at some of the policy and governance implications of what blockchain technology means to us.
We're gonna look at a few interesting cases, including the Silk Road, which is a really fascinating case almost straight out of a movie. Additionally, we will look at some technical details of things like smart contracts and what they mean. We'll also look at remittances, and how blockchain comes in and plays a significant role in transferring money from place to place. There are obviously technical details around that, but we won't go into specific coding or cryptocurrency mechanics or anything along those lines. Instead, we're gonna look at some real-world implications of this new technology, how it really impacts people, and hopefully, in some ways, improves their lives. – Okay, so you're not gonna be telling them which cryptocurrencies to buy? – Well, unfortunately, we don't have that knowledge. If we did, we probably wouldn't be sitting here earning professors' salaries from the university. All we can say is: make sure you do your research. – Yeah, and as the course moves on, we hope that you all maintain your engagement, especially within the discussion forums. One of the things we were super excited to see was all the comments from all over the world. We had comments from people in Kenya, all over South America, Europe, North America, and obviously here within the region of Asia where we are. It was super cool to see your comments, especially in terms of the very personal and specific ways that these things apply within your local communities and your specific jobs and careers – and hopefully in the way you think they're gonna disrupt and change how your communities, careers, and industries operate in the future. So please stay engaged, and we're really looking forward to more. – Yeah, and I'll go with what David Bishop has said – some of the feedback we've gotten has been fabulous and– – Really excellent, very thoughtful comments.
– The comments have been great, which we appreciate, and at some point we do get to all of them. One of us will look at them all, read them, and try to comment where we can. Additionally, some of you have told us that you've finished the module, thought it was great, and passed it on to your friends, colleagues, and acquaintances. So if you feel like this is really compelling information, and that the questions we're raising are important to think about and consider, please do introduce it to other people around you – I think collectively we can increase the quality and quantity of the discussions, because these are really important questions. – Yeah, one last point, because people have reached out to me about this. As you're gonna learn later on, this course talks a lot about migrant workers. I've actually been contacted by literally hundreds of foreign domestic workers here in Hong Kong, mostly from the Philippines, and we wanna say thank you for joining us – and don't worry if you're not a finance expert. We're super excited to have people from developing countries all over Asia, and really all over the world, looking into these insights and engaging in these topics and conversations with us. So if you don't understand everything, that's totally okay. Within the discussion forums it's absolutely appropriate to ask questions, right? And to really engage with us at that level, because we really hope that all people and all communities can benefit from these insights going forward. – Absolutely, because one of the key components of our course, about the ethics and risks of new technologies, is that it should be inclusive, right? – Yeah. – Part of our mission in doing this course is to create a greater level of financial literacy, as well as digital literacy, around some of these new trends.
So that people ask the right questions and think about the right factors to weigh as we navigate this new world and the technological future that we have. – Yeah, especially because it kinda gives the wrong impression that we actually know the answers. (laughing) I mean, the reality is these are very complicated, very complex problems that really haven't even fully developed yet, and so again, this is meant to be a broad conversation, and we really appreciate people of all levels joining in and sharing your insights. Ask your questions, and we'll be asking them too – and we're very sincere in that, because we don't have the answers, but we hope that society and our community within the course develop some insights into these questions together. Okay, so speaking of the great discussions we've had, we did want to give a shout-out to a few of the people who have been posting, because we really appreciated some of their comments. So first of all, from peter-nyc. He had some really good comments, and the one we want to talk about is the way he highlighted the role, or the purpose, of a business, and the way society defines success. He noted that maybe the behaviour of Wells Fargo was to be expected, because what we're taught in business school is that the role of business is to generate profit, especially for the shareholders – and that's exactly what they did. And more specifically, on the personal side, he said that for many people the definition of success is largely tied to their paycheck. – Yeah. – Right?
So what were some of your thoughts? – Yeah, so I thought peter-nyc's comments were super insightful. On the first point, about the purpose of business as it relates to profitability: that idea entered academia and filtered into mainstream business over roughly the last three to four decades. – Late 70s. – Yeah, as the norm – and if you didn't maximise profit, somehow that was incorrect. But actually, that's not true, and I think both of us teach this in the business ethics classes we normally teach: outside of a few circumstances, profit maximisation is actually not a legal requirement in many situations. And in terms of schools of thought beyond the traditional Chicago-school, Milton Friedman-esque profit maximisation, there are actually other schools of thought. Peter, in his comments, seems to be much more heavily influenced by management gurus like Peter Drucker, who advanced some of the ideas Peter is discussing. So I think it's important to point out that profit maximisation is not the default, and there has in fact been a shift away from it. I think on the second point, about how we value ourselves – is it based on a paycheck? – Peter rightly pointed out that that's faulty logic too. – Or at least that it leads to some difficult unintended consequences. – Yeah, I mean, if you're always measuring yourself using that as the metric, you'll always be behind. – Yeah. – That's the one thing: you'll never be happy or satisfied. But we also know, psychologically speaking, that money doesn't actually bring happiness. Now, for sure, if you don't have enough money to cover your day-to-day needs, we know that will make you unhappy psychologically.
But we know that once you move beyond that point, making more money doesn't necessarily make you happier. – And we both know enough wealthy people, I think, to say pretty definitively that the size of your checking account doesn't necessarily mean you earned it or that you're smarter– – Sure, sure. – than everybody else, right? – In many respects, you may just have been luckier than everybody else. – Yeah, yeah. A lot of it is timing and whatnot. So typically, what I tell my students when we're teaching business ethics is that the coolest thing about the concept of success is that it's a completely subjective term, and you get to define it. And one of the things we want to do within this course – really, I think, the underlying reason why we decided to do it at all – is to highlight a different definition of success, and have people, especially in the financial and tech industries, think ahead and ask: okay, what type of future do we want, and how are we going to define success? And peter-nyc, one of the things that's really encouraging is to see so many leaders in the financial industry now also leading the fight against short-termism – fighting against the idea that all you should be doing is looking at the quarterly earnings statements, and really pushing towards more long-term investment. So you've got leaders at BlackRock and JP Morgan and Goldman Sachs and other places – actually including Warren Buffett himself – saying that focusing only on short-term profits is probably a losing battle, and something we need to move away from. – Yeah, and I think what's also interesting is that in some of his follow-up comments, peter-nyc talked about culture a little bit, which I think is really important, both at the individual level and in how we measure ourselves. What is success?
That's all part of the cultural mantra that you want to have for your personal life. But he was also talking about organisations – banks and other institutions – and the culture that exists there. He shared an article about students who went to Harvard Business School. – Yeah. – Their idea was, "I want to go change the world," but they come out wanting to work at an investment bank. – Yeah. – Having worked at an investment bank– – Our students too. – Yeah, having worked at an investment bank myself, there's nothing necessarily wrong with working at one, and that's fine. But the idea that being in a particular culture changes what seems important to you, and what your aspirations are – I think that's important to understand. Because if you have certain values, you wanna put yourself in a culture, or create the culture around you and within you, that ensures you can live true to those values. Which I think is quite important. – Yeah, so that means, peter-nyc, if you are from New York City, which is one of the world's other great financial centres, then share this course with your friends and colleagues, and let's start changing the narrative – having people define success differently, and start looking at the role of financial institutions, and all companies, in a more ethical and morally centred way. – Thanks, Peter. We've got another comment from JoergHK – and hopefully that means you're in Hong Kong, which is so great. The comment was in response to the question about who is responsible for the negative consequences of tech innovation. And JoergHK suggested that perhaps the burden of that responsibility lies on the government, to design and fund policies through taxation to deal with the negative externalities of technological innovation.
That raised a very interesting debate on the discussion board, and it's something we frequently discuss between ourselves and within our classes here at The University of Hong Kong: who is responsible for the displacement of labour, for people who can't find jobs, and for technological advances that may make certain companies obsolete? Because on a wide scale, those things will happen and have already been happening. That created a bit of a discussion, and a user with the username Jessielam2018 responded that maybe the whole obligation doesn't actually fall on the government – there are other stakeholders who should get involved. So that's a great place to start with this very important question. David, what do you think? – It's super complicated, and this is really the crux of what we want to talk about for the whole course. So again, we really appreciate the sophisticated dialogue – very kind and thoughtful dialogue as well. One of the things we're trying to do is foster a place where people can disagree in a respectful way, and we appreciate you for doing that. So on the one hand, you have the role of the government, and JoergHK also mentioned statements from the World Economic Forum and the idea that, yes, people are going to be displaced. – Yeah. – Right? There are going to be a lot of new jobs invented, but there are also gonna be a lot of people who are not really able to move or upgrade within the workforce. And we're seeing that across the world, right? So again, I'm from the US, from an area that had a lot of manufacturing even just a decade or two ago. In fact, you may be surprised to hear, I used to work– – That you used to work? – Yeah, yeah. – That is surprising. – I used to work in a woodworking factory.
I worked in a factory in rural Southern Georgia with salt-of-the-earth, normal people, and the reality is that with automation, those are the types of jobs that over the past 20 years have largely gone away. And now– – Or moved to other countries– – Well, sure, sure, excuse me – they've left my area of the US. So you see a lot of areas that used to have a lot of manufacturing work that is now gone, like the Rust Belt: Ohio, Pennsylvania, etcetera. And that has had a massive influence on politics and so many aspects of everyday life, including social ramifications such as the opioid crisis now going on. So you can see the domino effect of negative consequences, and that's why it's so important for us to think ahead. And some of the comments everyone made were that this is not just restricted to manufacturing, or before that, agriculture, right? Now people are saying, well, we think finance is gonna be disrupted; we think accounting and auditing are gonna be disrupted. We're former lawyers – we think the legal field is gonna be disrupted. And so the question is: how do we reeducate and reintegrate workers back into society? I do see Jessielam2018's point, though. This is a problem. The government is typically reactive, right? – Yeah. – And often can't be proactive in these areas, so we do need other parts of society involved, and from the government's standpoint we need more proactive, positive incentives as well. She mentioned tax policy, for example. So you can have tax policy that encourages people to donate to charities, or maybe we can create tax policies that encourage more innovation, job creation, and other things.
– Yeah, and I think one important thing to point out about this trend that David Bishop just described – and that was described by JoergHK, and followed up on by Jessielam and others in the comments and discussion – is that this displacement of labour is actually not new. – Hmm. – We talk about it a lot now because of technology; maybe that's because the pace of the disruption is increasing. – Right. – But even if we look back just 20 or 30 years, at where electronic goods, garments, or even shoes were manufactured, we saw production move from Japan over to places like Taiwan and Korea. – Including here in Hong Kong. – And in Hong Kong. Then it moved over to mainland China. Mainland China was the place for much of that manufacturing because of its low-cost, somewhat skilled labour – a nice combination. But as the cost of that skilled labour started to creep up, much of that manufacturing moved either inland, to other parts of China, or to parts of South East Asia. – Right. – So this pattern of displacement is not necessarily driven by technology alone; it's obviously driven by cost as well, and those forces are intersecting now to make the pace of change faster than perhaps what we've seen before. So one thing I think we need to do as we continue this process is place these changes in their proper historical context. We often like to think the situation at our own point in history is quite unique, and frequently there are some unique aspects to it, but most of the time something similar has probably happened in the past.
Putting things in the proper context is helpful. And I think, more practically speaking with regard to policies, if we rely on governments alone to solve the problem, the issue we run into is that there's usually a timeline mismatch. Particularly for politicians who are elected into office, their perspective of time runs from when they enter office until they're trying to get reelected, whereas if you are the labourer being displaced, your perspective may be very different. So if you rely solely on government to solve those problems, to be frank, that could be a bit precarious. In that case, then, who should be involved? There is the government, obviously. There is the individual. But should industry be part of that process? Of course – I definitely think so. Should educational institutions like us? Of course. I think there's a very healthy debate to be had here: there are traditional research-based universities like ours, The University of Hong Kong– – Right. – and they serve their purposes in certain ways, but do all institutions of tertiary education have to be like us? I don't necessarily think so. – Right. – Could more of them be tailored to helping re-school and re-skill people who are at risk of being displaced? For sure. And I think we need to think about that, and about how certain tax policies, tax credits, and government programmes could be applied to make it more effective. – And there are a lot of government resources being utilised for that purpose now. – At least in certain countries. – Schools in South Korea, for example, don't have as many children in primary and secondary schools as they used to.
They’re actually bringing in the elderly from the countryside to come in and learn how to read, so they can keep those resources fully utilised. And here at The University of Hong Kong, my colleagues and I actually created a weekend programme called Empower You, where we bring in migrant workers and foreign domestic workers, mostly from the Philippines. They come and use the university’s resources and are taught by professors and company leaders in areas that help them improve their skills as well. So I definitely agree with David Lee: this is not necessarily anything new. However, I do hope that there is one thing that is different from before. As in all aspects of society and all aspects of learning, our hope is that we can learn from the mistakes of the past and forge a better version of the future this time around. So if we are going to make a difference, if we are going to have a better future, that means looking back at the last 30 or 40 years and understanding what went well and what we can change, so that as this new wave, this fourth industrial revolution, sets in, we will have a plan in place and society doesn’t have to reap these kinds of negative unintended consequences. You know, one of the discussions that I found most interesting, probably because it hearkened back to my legal training, was the conversation about accountability, and especially who should be accountable for harmful, negative, maybe even untruthful things that are posted online. Should it be the person who posts them? Should it be the platform? Or some combination of the two? A lot of you chimed in and said that it is the poster’s fault, and the poster should be accountable for anything that is harmful, anything that is untruthful. And so we had one comment from a user, and I’m not sure how to pronounce your username. Xeilani or something? X-E-I-L-A-N-I. A really insightful, fairly long comment that created a nice dialogue. 
It really focused on how it is the person who makes the comment who should be responsible for those words, especially if they’re untruthful, and how the platform is obviously also responsible for having a range of tools to make sure that comments are being read and analysed properly. But really, at the end of the day, it’s up to us. And so my comment back was: that totally makes sense, and some regulation is probably required, or at least more regulation is inevitable. But one of the things that I was wondering was: who is it that we trust to define harmful, or even untruthful, right? Because unfortunately truth is often in the eye of the beholder, and it’s often very difficult to decide. And it spurred a nice conversation. So where do you fall on this? I mean, there’s certainly no right or wrong answer yet, but in terms of this conversation about increased regulation, and about punishing the people that put harmful things up there versus the platforms, what are some of your thoughts? – Yeah, so I think, big picture, if we look at the ecosystem, so to speak, of the relevant parties involved, there’s obviously the content producer. There’s the platform. There’s the viewer. – Yeah. – Should that responsibility be portioned amongst those main parties, or are there other parties that we’re forgetting about that should also be included? Maybe that’s one of the foundational questions to start with. If we look back at different forms of media publication, frequently it was the platform that had more responsibility for the content they put out. – And largely– – The difference is– – Well, go ahead. – Here we go, right? This is the difference: they were screened. But they can still– – Well, they were screened, and publication was very, very hard. Right? 
So there were only a handful of newspapers, magazines, and so on, right? – And if you were an entry-level reporter a generation ago, maybe you started off as a fact checker. – Yeah. – Right? And this is why there was credibility associated with many publications, the well-known newspapers or well-known current events magazines. People had a belief or faith in them, and there was a system that filtered out the things that were incorrect. – Yeah. – And if– – And you had limited circulation, so you had to be truthful, because if you lose the consumers’ trust, you lose their business. – And your advertising business as well, right? And part of that too is that if they did find a problem, they would clearly correct it. They would say, this is a correction, this was incorrect in the last article, or something like that. And there was that kind of process, so people felt like these were the norms of information sharing and media. I think now, with the platforms that we have, that’s obviously not the case, because you can produce something and it will basically go through no screening on a lot of platforms and will just be posted. – Yeah. – Right? Which is a very different kind of issue, and if there are inaccuracies, then the process of rolling that back is a little bit more difficult than it was in the past. Now there are countries, as we mention later in some of the modules, that are trying to address this through legislation. Singapore is one of those countries. The United Kingdom is another that is trying to introduce legislation to make platforms more responsible, particularly for the veracity or truthfulness of the things that are posted. Those are countries that have decided to go in a particular direction to make platforms more responsible. It remains to be seen how that plays out and how it’s implemented. 
Some of those laws are just planned; they haven’t passed any sort of legislation yet. So that’s one area of interest, but I think one of the broader issues is that for a lot of platforms, in terms of where they’re hosted and where some of the data is being stored, there’s definitely a multi-country, multi-jurisdictional component to it. – Yeah. – Just because you get one country on board doesn’t actually mean you can solve the whole problem. – Yeah. – That’s another problem. – So what you’re getting to is the legal concept. We talked about accountability, but let’s be more specific about liability, right? This is where it really gets challenging. Let’s say, as Xeilani (or however you say your username) suggested, that it should be the poster who is ultimately liable for what he or she, or it, if it’s a company or a bot, posts, right? What if that person is located in a different country? Which is very possible, if not likely. Do you have zero recourse in terms of finding and then actually suing that person? Right? This is actually really reminiscent of a legal discussion that went on in the 1970s, tied to manufacturing, very much like what we talked about a moment ago. When manufacturing was done in one country, let’s say the US, and you’re manufacturing for US consumers, it’s really easy: if they manufacture something incorrectly and someone is harmed by it, you just go after the manufacturer. But then what happened was that all the manufacturing in the US started going abroad, overseas, right? And so all of a sudden, if you buy a toy, or if you get a toy in a McDonald’s Happy Meal that ends up, unfortunately, harming your child, let’s say, you’d have to find the factory in that overseas country and actually go after them directly. That was almost impossible, which meant that product liability as an entire legal concept essentially became useless. 
And so the result is that they created something called strict liability, which means no-fault liability. And this is complicated, but let me summarise by saying that the point of these laws is to ensure, if possible, that the injury doesn’t occur in the first place, but then if it does, that the person who is in the best position to make sure it doesn’t happen actually provides compensation to make the injured party whole, okay? So what that means is: what if you get some random person that you cannot identify, who is on some island somewhere, making false or fake comments? Who is in the best position to ensure that doesn’t happen? Maybe it’s that individual, maybe it’s Google, maybe it’s Facebook, I don’t really know. But here’s the other side of the coin, and this is where it gets really, really challenging. Because when you talk about restricting speech, you are unintentionally also limiting political speech, religious speech, etcetera. And this is actually playing out right now. Because these large platforms, including YouTube, Facebook, etcetera, are really concerned about fake news, hate speech, and other things that are creating big losses for them, they’re now starting to pull people off their platforms, including a disproportionate number of very conservative political commentators. And so now you have people that sit on that end of the political spectrum saying, wait a minute, this is discrimination, right? How is it that our ideas are somehow harmful? And again, that’s easy for us to say when we disagree with those ideas, but what if it’s your religion? What if it’s your political ideas that are being suppressed because you’re no longer able to get on that platform? 
These are very real concerns. On the one hand we want to make sure that these false statements are addressed, but at the same time we also have to understand that every time we limit speech, we’re limiting one of the most fundamental rights that people have. It’s very challenging. – Yeah, and I think that’s a great point, and it raises a connected question: people have a right to speech in most countries. – Right. – Or in a lot of the countries that we deal with. But do they have a right to that platform to express their speech? – That’s a valid question, yeah. – It raises other questions about access: at a certain point, is access to certain digital platforms a right? Right now we generally say no, but in this part of the debate it becomes an interesting question. I think if we go through some of the discussion points in response to what Xeilani first brought up, there are some really interesting points. I think peter-nyc had another great comment about this idea of how we design some of these platforms– – Yeah. – Which we thought was really insightful. If you go back and look at some of the early historical narrative around some of the widespread social media apps that most of us are familiar with, in a lot of situations those founders, the initial creators, actually didn’t have a deep understanding of what this would become, right? And so, on one hand, how do you model for “oh, we’re gonna have this kind of impact”? Because at a certain point, when you’re just a handful of friends starting something and you think, “we’re gonna be able to influence three billion people in the world”, that’s pretty– – Yeah. – Arrogant, and in some ways crazy, right? – Yeah. – One or the other. And so how do you model for that? That becomes a difficult question. But then at what point should you start approaching it from “should we be doing something about that?”, right? 
And that again raises an interesting and insightful question about how you create the right structure for these platforms to police themselves. – Yeah, and one of the things that we’re gonna talk about later on is access to algorithms. So as machine learning, AI, and these other things become more prevalent, and we’re using them every day whether you realise it or not, we have to understand that these are literal black boxes where we can’t see the algorithm. I watched, do not do this, it’s a waste of time, I watched a 23-minute video today from a prominent YouTuber about the YouTube algorithm and how difficult it is as a YouTuber to understand that algorithm and to create viral content. What he was saying is that the reason why YouTube is getting more and more clickbait-y, as he put it, and why it’s relying on thumbnails more and more, which is why you’re seeing a lot of irreverent pictures or things that really cause people to click on them, is because of the way they created the algorithm. It really– – To reward that. – It really rewards the click-through rate, the CTR. And so he was saying that as a creator you can’t be as thoughtful about those things, because the algorithm is driving people in that direction. Now, we can’t see that algorithm, right? This is one of those things. And so the question going forward is: should we be able to? Should this be a public good? These are some of the questions that we’re going to address in the future. And so as we go forward, again, we really appreciate these very thoughtful comments, because these are the broad questions. These are the new social goods. These are the new commodities. And so as we move forward, as we think about how these companies should be operating, we really need to think this through. Now, one last point, sorry, I know this is already running long. 
One last point: when we talk about these billionaires that own and operate these massive technology platforms, one thing to consider, which is totally new in this landscape, is that because of what is known as weighted voting rights, you now have people like Mark Zuckerberg who do not own a majority of the shares in their company anymore. So he does not own a majority of Facebook, and yet he has almost complete control over what Facebook does, because every one of his shares has 10 voting rights, whereas if you own Facebook shares, you only get one voting right per share. – So a different class of shares. – A different class of shares. And as a result of that, you have people like Jeff Bezos, or the people that own Alibaba, who have an immense amount of power and control, not only over your data and privacy, which we’re gonna talk about going forward, but literally over the news content that we read every day, right? The types of products that we see and buy. There are so many things. – And ironically, I guess, the modern historical genesis of weighted voting, or multi-class share structures, was to allow media companies a little bit more editorial independence– – Yeah, protection from shareholders. – Protection from being influenced, from being told, “oh, I don’t like you sharing this kind of truth”, so allowing them to insulate themselves from that, and now it’s maybe a little– – It’s gone full circle. – A little backwards. That’s interesting. I think the other thing that we’ve talked about in the past, but which is incredibly relevant to what you’re talking about now, is the pervasiveness of some of these social media platforms and how they influence us. And we both use them a lot ourselves, so we’re not saying that by default they’re evil, but more that we should just be aware. 
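To see why weighted voting rights are so powerful, here is a small back-of-the-envelope sketch in Python. The numbers are purely hypothetical (they are not Facebook's actual figures); the point is only the arithmetic of a dual-class structure where founder shares carry 10 votes each.

```python
def control_fraction(founder_shares: float, votes_per_founder_share: float,
                     public_shares: float) -> float:
    """Fraction of total votes controlled by the founder's block.

    Public shares are assumed to carry 1 vote each; founder shares
    carry `votes_per_founder_share` votes each.
    """
    founder_votes = founder_shares * votes_per_founder_share
    total_votes = founder_votes + public_shares
    return founder_votes / total_votes

# Hypothetical illustration: a founder holding only 15% of the equity,
# with 10 votes per share, still controls well over half of the votes.
print(control_fraction(15, 10, 85))   # ~0.638, i.e. ~64% voting control
print(control_fraction(15, 1, 85))    # with one-share-one-vote: just 15%
```

So a minority economic stake can translate into near-complete voting control, which is exactly the dynamic described above.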
I think, in my own world, working with some technology companies and start-ups, it’s incredible. If you talk to people who work on large social media platforms, or look at how they describe the platform design, it’s intentionally created so people stay on it, right? – Yeah. – The user interface is very much a combination of human behaviour psychology and graphic design and other things to make sure people stay on it, and for a lot of people putting together these platforms now, part of the design question is: what can we do to somewhat manipulate people into staying on it? And this is part of the business model. – Yeah. – So it’s important for us as consumers to be aware of that. – Yeah. Okay, if you get us talking, we’ll talk forever. So we’re gonna end here, but we hope to see you in the blockchain modules, where we can talk about how some of these new emerging technologies are gonna be impacting our lives, for good, and maybe a little bit for the negative, and how you, as an innovator, as someone in the finance industry, or just as an interested party, can utilise these technologies in your own life and your own career. Module 2 Blockchain and Its Governance Module 2 Introduction
Hi, and welcome back to Module 2! Thanks for sticking with us. We promise it’s only going to get more interesting! In this module we are going to talk about blockchain, which is really one of the key catalysts for the rise of FinTech. Now, a few upfront caveats. The focus of this module is NOT which cryptocurrency you should invest in. If you have followed the cryptocurrency markets, they have been particularly volatile, so, as the cryptocurrency enthusiasts like to say, “Hodl”: hold on for dear life. Frankly, we don’t know which cryptocurrency you should invest your life savings in, so please don’t ask; and if we did know, honestly, we probably wouldn’t be doing this course, we’d be at a warm beach. Another caveat is that this module will also not discuss initial coin offerings (ICOs), IICOs, STOs, or any of the other variants by which someone might try to fundraise or monetize a blockchain project. Don’t get us wrong, these mechanisms are all interesting, but there is so much information to cover that it could easily be its own course, and because of changing laws and regulations in different jurisdictions, it’s difficult to explain in a snapshot format since the regulatory landscape is constantly changing. Maybe most importantly, though we are both lawyers, we’re not your lawyers, so if this is something you are thinking about doing as part of a blockchain project, please speak with your lawyer. Given what we just said, the focus of our module is more about the questions that might be good to consider as blockchain technologies become more pervasive: really, what are blockchain’s implications, both the wonderful disruptive possibilities it represents and the potential issues we should consider before completely embracing it. 
You’ll find we won’t focus too much on blockchain’s technical details in this module, basically for two reasons: 1) because blockchain and its applications continue to grow so rapidly, things will likely have advanced a bit between the time we prepared this module and the time you end up watching it; and, more importantly, 2) the next course in the FinTech Certificate, which our FinTech Ethics course is also a part of, is entirely focused on blockchain and is taught by a wonderful colleague of ours from the Faculty of Engineering at the University of Hong Kong, who is a real technical expert in the space. So if you find yourself with an increased interest in blockchain, please be sure to register for the next course, “Blockchain and FinTech”. Ad-libbed: So one quick question, since we keep using these terms interchangeably: what is the difference between blockchain and cryptocurrency? That’s a great question, David, and I think a lot of people use those terms interchangeably. Effectively, cryptocurrency is one of the outputs of a blockchain. As computers that are part of a blockchain network mine, and we’ll cover some of this vocabulary a little later in the course, that is, solve problems to build additional blocks onto the blockchain, coins are produced, in part to incentivize those miners to do that activity. So keep that in mind as we go through. A blockchain, which is a distributed ledger network, can be used for all kinds of things, including tracking certain types of goods or services, or even people, whereas cryptocurrencies, in their various forms, are specifically new forms of payment. 2.1.1 What Is Blockchain Technology?
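To make the mining idea above a little more concrete, here is a toy sketch in Python. This is our own illustration, and a big simplification: real Bitcoin mining hashes an 80-byte block header with double SHA-256 against a network-wide difficulty target, and the winning miner earns newly issued coins plus fees. The toy version just searches for a nonce whose hash starts with a few zeros.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Toy proof-of-work: find a nonce whose SHA-256 hash of
    (block_data + nonce) starts with `difficulty` hex zeros.
    Finding such a nonce is the 'problem solving' that earns
    a miner the coin reward on a real network."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Alice pays Bob 1 coin")
print(f"nonce={nonce}, hash={digest[:16]}...")
```

Raising `difficulty` by one multiplies the expected work by 16, which is roughly how real networks throttle how fast blocks (and new coins) are produced.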
In its most basic form, a blockchain is a distributed ledger: essentially a series of digital records, referred to as blocks, which are connected together to form a chain of records, hence “blockchain”. Instead of this data being kept in a single place, though, the information is replicated and distributed across a peer-to-peer network of computers. The network collaborates to confirm whether new blocks of data can be added to the chain, which makes it difficult for a single member of the network to add incorrect information. Additionally, this decentralised nature also makes the blockchain difficult to modify, thus preventing tampering. Though a number of people had researched and thought about blockchain, and many of the cryptographic technologies that underpin it, before, the concept of the blockchain and its offshoot, cryptocurrencies, really entered the public domain after a white paper was published in 2008 by Satoshi Nakamoto, titled “Bitcoin: A Peer-to-Peer Electronic Cash System”. The paper described bringing together various technologies and cryptographic methods to form the Bitcoin protocol, and it has gone on to serve as a framework for many of the subsequent blockchain-related advances in the FinTech space. So who is Satoshi Nakamoto? Though there has been a lot of speculation, Satoshi Nakamoto is a pseudonym, and the general public really does not know, at least not yet, the identity of this person, or whether Mr. Nakamoto is a single person or perhaps even a group of people. And even if we never figure out who Satoshi Nakamoto is, there is a real possibility that history will look back on the 2008 white paper as a seminal moment that fundamentally changed the course of history, or at least financial history. 2.1.2 How Is Blockchain Governed?
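The "chain of records" idea can be sketched in a few lines of Python. This is an illustration of the linking mechanism only (no network, no consensus): each block stores a hash of the previous block, so changing any historical record breaks every link after it, which is what makes tampering detectable.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministically hash a block's full contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    # Each new block records the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    # Valid only if every link still matches the recomputed hash.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))                 # True: the chain is intact
chain[0]["data"] = "Alice pays Bob 500"  # tamper with history
print(is_valid(chain))                 # False: the broken link is detected
```

On a real blockchain this same check runs on every computer in the peer-to-peer network, which is why a single member cannot quietly rewrite the ledger.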
When considering FinTech governance, especially for blockchain technologies, is the lack of regulation a pro or a con? Blockchains are effectively regulated like industry groups, or even members-only clubs, and the mechanism for governance is generally based on the principle of majority rule. But is majority rule always right? This takes us straight back to ancient Greece, right? But the reality is that most modern democracies are not actually direct democracies where the simple majority always wins and governs. This is why we think that Bitcoin and blockchain are simultaneously so appealing and yet so threatening: because a one-person, one-vote idea is basically built into the code. And so whoever controls the majority also gets to rewrite the rules. And participants are typically anonymous, so it’s difficult to identify who the other actors are. These principles raise a whole host of interesting issues. As you think about particular blockchain protocols, be it Bitcoin, Ethereum, or other widespread protocols that are gaining more and more different use cases, we could easily imagine a situation where a particular protocol or application becomes so widespread, and affects so many other people, that we have to ask: do we want that to be governed by the members who hold the coins and can vote, or should it be regulated at a more national or even international level? Which process would you trust more? Now, we’re not advocating that blockchain should be governed at a national or international level, or have greater regulatory scrutiny per se; it just raises the question: as these technologies become more pervasive, is the current governance structure the way we want to deal with that? Especially if it is going to impact so many other people who are not necessarily part of the “member system”. 
If you consider voting from a corporate governance perspective, the concept of majority voting, otherwise characterized as one share, one vote, has long been the general rule. But while things definitely started that way, the reality is that a whole host of diverse voting mechanisms have been adopted to ensure proper governance. For example, supermajority voting has been legally built into many aspects of the corporate world. An example of this would be a special resolution to change the name or nature of a company, which would require a supermajority of the shareholder votes. Beyond that, a basic majority or supermajority voting rule is not always the most efficient way to decide something. Now we have things like cumulative voting and other methods through which a minority shareholder or voter can have a stronger influence or voice on a particular matter. So if we apply this back to blockchain and cryptocurrencies, which are still at their genesis, we need to consider the best way for us to manage them. Should there be a more comprehensive type of voting or control structure? Or do we really want simple majority rule, and to just give power to the people? These are the types of questions that are going to take some time to answer. We talked about governance and how some of these protocols are governed by users, and fundamentally we have to remember that blockchain seeks consensus first, not necessarily fairness or efficiency. That could be right or wrong; it’s something we’ll have to consider in the future. But will blockchain and its uses create greater inequality in the long run? And if we jump ahead, will people that are already left behind be further left behind? One of the novel uses of blockchain is coupling it with something called a smart contract, which is not really smart and may not always actually even be a contract. So now that you’re probably confused, let’s talk about it. Additional Readings 2.2.1 What Is a Smart Contract?
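Cumulative voting, one of the minority-protecting mechanisms mentioned above, has a neat arithmetic behind it that a short sketch can show. Under the standard rule, a shareholder block can guarantee n board seats whenever it holds more than n/(seats + 1) of the voting shares; the figures below are hypothetical.

```python
def seats_guaranteed(shares: int, total_shares: int, num_seats: int) -> int:
    """Board seats a block can guarantee under cumulative voting.

    Standard rule of thumb: a block guarantees n seats if
    shares > n * total_shares / (num_seats + 1).
    """
    return max(0, (shares * (num_seats + 1) - 1) // total_shares)

# Electing 4 directors, 100 shares outstanding:
# a 21% minority can always win one seat under cumulative voting,
# whereas under straight one-share-one-vote the majority sweeps all four.
print(seats_guaranteed(21, 100, 4))   # 1
print(seats_guaranteed(51, 100, 4))   # 2: even a 51% majority can't take all 4
```

This is the sense in which voting-rule design, not just majority rule, shapes who actually holds power, in corporations and potentially in blockchain protocols too.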
The term “smart contract” sounds really exciting and futuristic, right? But hold your excitement, because the current form of smart contracts is neither smart nor even a contract. Computer scientist Nick Szabo, an influential figure in the blockchain and cryptocurrency world, is credited with coining the phrase “smart contract” as early as the mid-1990s. A smart contract is simply a computer protocol, really some lines of code, that automatically executes a specified action, like releasing a payment, when certain conditions are fulfilled. So this code might represent an aspect of a contract, but the code itself is not actually a contract. Additionally, it’s not smart, because a person still needs to think of the terms that will be represented by the code. So someone like a lawyer is still needed to think through and negotiate the terms to be coded. If these smart contracts are actually neither smart nor contracts, why are they so special? To answer that question, imagine you are cleaning out your room and find a tennis racquet you never used and now want to sell. You go online and are able to find a buyer, say David, who lives nearby. You set up a meeting and show David the tennis racquet. David confirms his interest, gives you the money, and you hand over the racquet. In this example, there is minimal risk that David will be able to run off with the racquet without paying you. But imagine the same situation, except you live far away from each other, so you aren’t able to meet. Do you feel comfortable sending the racquet through the mail and trusting David to pay you? Now, this type of risk is usually less of an issue when dealing with large companies, like when you order a t-shirt from your favorite brand’s online store, or with people you have repeat transactions with; but for one-off situations or large, complicated transactions, like a home purchase, there can be some uncertainty about payment, delivery, quality of product, etc. 
In such a situation, then, what if you could find a third party, say Jon, to take the payment from David before you send the racquet, with the payment released to you when the racquet is received? Would you feel more comfortable? This is exactly how smart contracts work: using “if something happens, then…” or “when something happens, then…” logic to solve this problem. So in our example, if a specified contractual term, say racquet delivery, has been fulfilled, then the protocol executes the release of payment, thus solving the problem. So how does this relate to blockchain? With blockchain technology, these smart contracts can be stored or embedded on a blockchain, so instead of being visible only to the counterparties that have a copy of the contract, as in a traditional contracting situation, a smart contract is widely available for inspection on the blockchain. In the example of selling your racquet, not only do you, David, and Jon know about the contract; it is also visible to the bank that processes David’s payment, the courier who delivers the package, and every other actor that is involved in the transaction or has access to the blockchain in general. The distributed nature of the blockchain makes it difficult for a bad actor to not pay, delay payment, manipulate terms, or otherwise deviate from the terms of the original agreement, because the terms are recorded across the network and cannot be changed. And once the terms are fulfilled, payment is self-executing and happens automatically. Which means that when the blockchain records that the racquet has been received, the money is sent to your account automatically. So what are the benefits of a smart contract? Well, some things that may come to mind are: one, these things don’t require human interpretation, hence taking out some human error, because they’re self-executing. 
So there are no issues with a human doing something incorrectly as part of processing a contract, and it removes some of the temptation someone might feel of “well, if I keep my end of the deal, then I end up worse off”. So it removes this human temptation issue. Additionally, once a smart contract is coded in, generally it can’t be changed, so it’s immutable. Because of those factors, this ultimately should save time and money, thus making things more efficient and reducing transactional friction. Additionally, if we tie this back into the tennis racquet example, it removes the need for a third party. You see, for lots of transactions historically, a third party has been necessary to hold payment or collateral due to the risk related to a lack of trust, which is something we’ve talked about. Perhaps the most common form of this type of third party is known as an escrow agent. Now imagine that, instead of buying a tennis racquet, a US company is trying to purchase a big building in another country, say, China. The parties do not know each other, and they cannot meet somewhere with a pile of cash to make the payment and sign the deed at the same time. So the two contracting parties may enter into a staring contest of “who’s going to pay first?” or “who’s going to act first?”. In this situation, an escrow agent would serve as the third party, or middle party: on one hand holding the payment from the US company, and on the other hand holding the signed deed or legal agreement from the building owner. And once the two parties agree to pay and finalize the terms of the transaction, the agent transfers the money and the deed simultaneously, ensuring the building owner gets their money and the building purchaser receives the legal title and the relevant documents, so they can own the building. 
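The "if delivered, then release payment" logic above can be sketched in plain Python. To be clear, this is only an illustration of the conditional, self-executing idea: real smart contracts run on a blockchain (for example, written in Solidity and deployed on Ethereum), where the code and its state are replicated across the network rather than sitting in one program.

```python
class EscrowContract:
    """Toy escrow: payment is locked up front and released
    automatically once the delivery condition is met, with
    no third party (no "Jon") needed to hold the money."""

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.released = False

    def deposit(self) -> None:
        # Buyer locks the payment into the contract first.
        self.funded = True

    def confirm_delivery(self) -> str:
        # "If delivered, then pay": self-executing, no human discretion.
        if self.funded and not self.released:
            self.released = True
            return f"{self.amount} paid to {self.seller}"
        return "conditions not met"

deal = EscrowContract("David", "you", 50)
deal.deposit()
print(deal.confirm_delivery())  # prints "50 paid to you"
```

Note how the seller cannot be stiffed once the racquet is confirmed delivered, and the buyer's money cannot be released twice: the conditions are checked by code, not by a trusted intermediary.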
As you can see, smart contracts could serve the purpose of cutting out the middle party, be it Jon in the tennis racquet example or the escrow agent in a large international real estate transaction. And as we previously discussed, a lot of time and money can be saved by cutting out the middlemen. But does that mean smart contracts are great solutions for all contracting relationships or situations? The answer to that is “no”, and we’ll discuss why in the next video. But before that, we’d like you to think about a question: what are the situations in which a smart contract might make your life easier? Additional Readings 2.2.2 Applications of Smart Contract
So, as we discussed, smart contracts may not be the solution for every legal problem. Definitely. So why is that? Because I think people think that since it’s “smart”, it should just evolve and it’ll be okay, but that’s probably not the case. So why is that? Well, like you mentioned, the concept of smart contracts has been around since the 90s, and yet the vast majority of people don’t know what it means or have never actually used one before, because in reality implementing a smart contract is really hard. Basically, smart contracts are typically a binary solution, “if this, then this”; it is very much like computer programming. In a legal situation, if I tick off all these boxes, then you are automatically going to remit the funds or transfer the deed or whatever it is that is the outcome of that contract. But if it is not a situation where you can just tick off those boxes and have “if this, then this” type solutions, and most legal situations are not like that, as we know, then a smart contract is very, very difficult to apply. I think maybe as AI becomes better and machine learning gets better, then maybe it will be able to get onto the periphery and deal with those grey areas a little bit better, but until then, smart contracts are going to be relegated to very simple, very rote, “if this, then this” type contracts. Interesting, so, I think there are two things that are really interesting about that. One is the idea that a smart contract is kind of an oxymoron, in the fact that it actually is not that smart, to be frank. Like an honest lawyer. Just kidding. But secondly, I think the point is that the applications of smart contracts will probably be limited to the routine and mundane. Potentially. 
Well, not to say not important, but just to say: if you and I are buying, let’s say I’m buying a building from you, and you are in Seoul, and I am here in Hong Kong – there are a lot of variables in that. Right, so, I have to do my due diligence, look at past history, understand potential legislation; I have to look at the foundation, I have to look at utilities, I have to look at mortgages. All these other things. So, typically when you enter into a contract that’s complex like that, it will have conditions precedent and all these things. So really quick, Dave – we understand what that means, but what is a condition precedent? It is a condition that precedes the closing. So if we enter into a contract, we sign it, but I’m not going to give you the money yet – and you are not giving me the deed yet. Instead, we have to go down a list and confirm every single thing has been done. Right, so, I’ll usually get a few months. I’ll look: okay, is the foundation solid? Yes. Do my engineers like it? Yes. Research litigation: is there any litigation history? No. Right, so then after you tick off all those boxes, I finally agree to give you the funds and you transfer the deed to me. It sounds simple, but it is obviously very complicated – because life is complicated. I think it’s interesting that you talk about the idea of the complexity of life. Because I think what we are talking about really is: any time there is some major qualitative assessment that’s necessary, then it’s going to be very difficult for a smart contract to really be applied to that. It’s where those variables are really minimal or nonexistent – or it’s very vanilla – about “Okay, this is what needs to be done, this is what you need to do”, and those responsibilities are very clearly defined, that we can rely on smart contracts. And here’s the interesting thing that a lot of people don’t think about when they think of contracting. You legally have the right to breach. 
Right, so when you enter a contract, there are ethics issues in there, and obviously you want people to fulfill the agreement – but you always have the right to back away. Now, there are legal ramifications for that. If you stop paying your mortgage, they can take your house. You could pay a fine. Yeah, exactly, pay a fine, whatever, but the point is, if there is some underlying condition where I need to stop paying my mortgage, I have the right to do that. Within a smart contract, you don’t have that option, generally speaking, because again – upon the conditions being fulfilled, it is self-executing – it executes automatically. Right, so when they say smart, what they mean is that it does not require human intervention to execute and fulfill the terms of that agreement. But it’s like a roller coaster. Once you are going down the hill, there’s no pulling back; you’re kinda stuck with that ride. So, there’s a level of commitment that’s required if you go down this route. Which is why I don’t think you are going to see, any time soon, any type of complex transaction where people are using smart contracts. Everybody wants to be able to get to the end of the line in that roller coaster analogy – at the very last moment, they wanna say: you know what, I don’t want to get on this ride. Even if that means they have to pay a fine. Even if they have to pay a breach fee or something. I need to get off this ride. And I think a lot of companies and a lot of transactions need that. Yeah, I think you’re right. And for complex types of transactions, you’re right: I don’t think the use of smart contracts will necessarily proliferate, in the near term at least. But I do think there’s a wide variety of daily contracting that we just normally do that could really be applicable to this. I mean, probably the most complex version would be a home purchase, to be honest. 
If you got the right documentation done up-front, then you could potentially find a very efficient smart contract to deal with escrow and things like this. But I think it’s an interesting thing that a lot of people, both lawyers and technologists, are continuing to explore; it’s a really important part of this FinTech ecosystem that people are trying to create. Yeah, IoT, right? The Internet of Things. With wearables and things. I can see, for example, say, a health insurance policy, where there’s a smart contract tied to that, where if you exercise a certain number of days, or if you use certain things, then your premium comes down. Yeah, exactly. If you drive… So they are already doing this with cars, right? They’ll put a device on your car to measure your speed and everything, and as long as you’re a safe driver, your insurance premium comes down. A lot of those aren’t officially smart contracts yet, but you can see the method. Totally. You get the big data analytics, you get the AI and machine learning on the backend of that. It makes it very easy for that to be executable. Like a thousand little contracts. Basically. So the lesson I take from that, before we move on, is: if and when that happens, and wearables are reporting to my health insurance provider, and that impacts my health insurance premium, I will purchase a dog, and we’re gonna put a wearable on the dog and let the dog run around. People are actually doing that already! So there you go. Just kidding. That was a joke! This is the ethics side of it. That was a joke. We also have humour in our modules as well. Thanks. Additional Readings 2.2.3 Implications of Blockchain Technology
Well, blockchain sounds awesome, right? So what’s the problem? Well, really no problem per se, but let’s consider some questions. Okay, so first, from a business perspective, blockchain is just another type of technology, and it’s not a panacea for all business problems. So it’s important that you have the type of business problem that lends itself to a blockchain solution. Now moving beyond that, though, there are other implications to consider. Blockchain has an impact on the environment, for example. Remember when we mentioned that blockchain is a distributed network? Each node on the network is a computer that requires electricity. Each of those computers is engaged in “mining” – effectively solving complicated mathematical problems to add blocks to the chain. These mining rigs require lots of electricity, both to run the computers and for the cooling that prevents the computers from overheating. So I have students that have a spare laptop or computer in their dorm room, and they have downloaded mining software and use electricity in their dorm 24 hours a day to mine – albeit mine very inefficiently – and they think the electricity is free, but of course that comes at a cost. So, for a layman, someone who’s not a technologist: you keep using the term mining, what does that mean? Basically, computers have to calculate a series of very complicated mathematical problems in order for them to be approved to add an additional block of information to this “blockchain”, and this is a level of security and access, a barrier to access, to prevent people from just adding things ad hoc onto the blockchain. Now, the ramification of this, however, is that it takes an incredible amount of computing power and will continue to take more and more computing power. And this is not just for Bitcoin, which is maybe the oldest or most well-known of the different types of cryptocurrency and blockchains out there. 
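The “complicated mathematical problems” of mining are, in Bitcoin-style proof-of-work, a brute-force search for a number (a nonce) that makes the block’s hash start with enough zeros. A minimal sketch in Python (heavily simplified; real network difficulty is vastly higher, which is exactly where the electricity goes):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce such that SHA-256(block_data + nonce),
    in hex, begins with `difficulty` zeros. Each extra zero makes
    the search roughly 16x more work, hence the power consumption."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # proof found; the block may now be added
        nonce += 1  # otherwise, burn more electricity and try again

# Finding the nonce is expensive; verifying it takes one hash.
nonce = mine("block: Alice pays Bob 1 BTC", difficulty=4)
```

The asymmetry is the point: any node can check the proof with a single hash, but producing it requires, on average, many thousands of attempts, so adding a block ad hoc is prohibitively costly.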
But for all the other different types of blockchain projects that have sprouted up, each of them requires some level of what they call mining. So now we have these huge mining rigs or farms out in random places in the world, and all they do is have these banks of computers calculating these series of mathematical problems in order to add more and more blocks to whatever blockchain they are working on. Okay, so back to the environment and the implications for that: you may be surprised to learn that, according to a 2017 estimate, mining Bitcoin consumed more electricity than each of 159 individual countries. So they say 30 terawatt-hours – whatever that means. It means a lot of electricity. Right. And that’s only for Bitcoin; you can see that the electricity consumption would be much, much higher if it also included the mining of other types of blockchain. I realize this is a FinTech ethics course, so why should we be talking about the environment? So, from my standpoint, this is a really interesting and super, super important point that many of you maybe don’t really think about. When we talk about the implications of these technologies, in this course, or if you’re just reading about them, we are typically talking about the person-to-person, kinda transactional, costs that they may have. So, loss of privacy or access to finance – and those are super, super important. But what we also have to think about, and what we hope that you think about, is the broader social and physical – even geographical – implications of these things. When we introduce these technologies, again, getting back to this concept of cultural lag, the technology has far outpaced our understanding of how to really deal with that technology in our real lives – in terms of its implications for the natural environment. So there are good and bad examples of this. 
So, in some places – in Canada, for example, I’ve read – they are taking old abandoned sawmills and lumber facilities that have been shut down, and they are retrofitting those large facilities into mining farms, which for some people is good: it means more jobs, maybe brings income in there. But there are a lot of negative ramifications as well. So again, noise pollution is very serious, so a lot of people in those communities are complaining about the noise. There’s obviously the electricity consumption. So the vast majority of mining farms are in China, and the vast majority of electricity in China comes from burning coal. And so there are very serious ramifications, both now and in the future, for things like that. But on the flipside, as culture kinda catches up to the technology, we are also starting to think more creatively about where to put these large installations. So for example, again, in Canada, I’ve heard – I haven’t really seen this in use, but I’ve heard – that they are actually trying to use the heat that is generated from these mining computers to heat industrial complexes or maybe even other types of buildings and homes. So. Okay, so there are some mixed uses, or mixed purposes, for the locations hosting these farms. Yeah. From a technology or FinTech ethics standpoint, I’m curious on your perspective: should a technologist or inventor or whatever, a bank – should they have to, or should they even want to, think about the environmental implications, or should they just be focused on the technology and the business model that they have? So that’s a great question and, I think, a really fundamental question that we shouldn’t just be considering in our course, but in a lot of different domains, to be frank. 
I think science is telling us that we are at the precipice of some really fundamental changes that are happening to the world – well, have been happening to the world – and I think if we try to silo ourselves off and say “what I’m doing is not directly related to that”, we can collectively find ourselves in a place that we didn’t intend to be. So, irrespective of industry, I think the impact that industry is having on the environment is important to consider. Yeah, there are things that we are gonna talk about in later modules. You’ll hear us talk about some positive uses of this technology. So again, it’s not just about currency. Blockchain can be used to track diamonds, to make sure that they are not conflict diamonds, or blood diamonds as they are sometimes called. To track people – either people that don’t have a government ID, like refugees, or migrant workers who potentially are at risk of human trafficking and slavery. So, there are a lot of very positive uses of this technology, even uses that have nothing to do with currency whatsoever. But I think one of the things, as David was just mentioning, is that we want you to constantly think about: what is the balance between introducing these new technologies and the positive ramifications for change – the disruption, as Silicon Valley would say – to these markets, to the financial industry, and what are also the unintended and possibly negative consequences that can come from these things? Not only now, but in the future, because if we are not thinking about those things, then by the time we get to that point and we see them right in front of us, it might be too late. Additional Readings 2.3.1 Applications of Blockchain Technology
So, now that we have a better understanding of what blockchain is and some general idea of its possible uses, it’s probably becoming clearer why people are both excited and concerned about the technology. From a trust and accountability standpoint, the anonymous nature of blockchain means that user data and privacy are better protected, at least within the system. But in an ironic twist, blockchain-based markets are also where stolen customer personal data is bought and sold, because law enforcement often has trouble identifying the parties involved. And in a more commercial context, some are concerned that the unaccountable structure of blockchain-based products, like ICOs for example, leaves investors, and in some cases even the public at large, vulnerable. I think we are only just beginning to understand the incredibly beneficial aspects of blockchain technology. But from a cultural lag perspective, we also realize that we probably don’t yet understand the full extent of the challenges that will arise from its use. So let’s look further at a few examples of how blockchain can be used for both good and bad. First we will discuss a really exciting case about the dark web marketplace Silk Road, which used blockchain, and in particular Bitcoin, to create one of the largest marketplaces for illegal goods the world has ever seen. This Silk Road marketplace was like eBay or Amazon, but for illegal drugs and weapons. How could such a marketplace exist, you might be asking? Well, it was hidden on the dark web. So before we get into the case, let’s first take a moment to discuss what the dark web is. 2.3.2 Dark Web and Tor
Think of the internet as an iceberg in the ocean. The part that is visible to you and me is the “surface web”, which consists of the indexed pages on the internet, such as Google, or things you might find on Amazon and Facebook. Then there’s the deep web. The deep web is a subset of the Internet consisting of pages that can’t be indexed by search engines like Google or Bing. Pages that require membership fall under this category – so, online banking, your company intranet, and the very page that you are watching this web lecture on. Then there’s the dark web, also called the “dark net”. This is a further subset of the deep web. None of the content can be accessed via a normal Internet browser; instead, you need special cryptographic software, such as The Onion Router, also known as Tor. Tor is free software, initially created by the US Department of Defense and the US Navy in the 1990s for the purpose of secure communications. The name itself is an analogy to an onion with lots of layers – layer upon layer – as it offers anonymous access to online resources by passing user requests through multiple layers of encrypted connections. Therefore, you can think of the software essentially as a digital invisibility cloak, hiding users and the sites that they visit. And it is this anonymity of the dark web, coupled with blockchain’s relatively anonymous and decentralized nature, that laid the foundation for the infamous marketplace Silk Road, which we’ll introduce next. Additional Readings 2.3.3 Case Study – Silk Road
In February 2011, Ross Ulbricht, under the pseudonym Dread Pirate Roberts, created the website platform Silk Road, where people could buy anything anonymously and have it shipped to their home without any trail linking back to the transaction. Named after the historical trade route that connected Europe to East Asia, Ulbricht founded Silk Road with the desire to create a marketplace free from taxation and government. The clandestine online marketplace was largely made possible by the combination of the widespread adoption of bitcoin and the invisibility of the dark web. Combining the anonymous interface of Tor with the traceless payments of the digital currency bitcoin, the site allowed drug dealers and customers to find each other in the familiar realm of ecommerce. It functioned like an anonymous Amazon for criminal goods and services. Silk Road gradually developed to look similar to traditional web marketplaces, with user profiles, reviews and more. And what started out focusing on drugs soon included other products, such as firearms. And although the authorities were aware of the existence of Silk Road within a few months of its launch, it would prove challenging to crack down on the website and reveal the true identity of its founder, Dread Pirate Roberts. In June 2013 the site reached nearly 1 million registered accounts. Thousands of listings featured all kinds of drugs, prescription medication, weapons and more, turning its founder, the 28-year-old libertarian, into one of the world’s biggest drug kingpins. From its launch on February 6, 2011 until July 23, 2013, over 1 million transactions had been completed on the site, totalling revenue of almost 10 million bitcoins and about 600,000 bitcoins in commission. That involved around 150,000 buyers and 4,000 vendors. At Bitcoin exchange rates in September 2013, that was the equivalent of 1.2 billion USD in revenue and 80 million USD in commission. 
In early 2013, a New York-based FBI team, Cyber Squad 2, had started their investigation of Silk Road. They were trying to crack the encrypted Tor network that Ulbricht was hiding behind. And like other law enforcement agencies, they were having a hard time – even using undercover agents to try to get access to Ulbricht – and they were all struggling to break the case open. Finally, through a warning note on Reddit, the cyber squad was able to find code that was leaking an IP address, pointing to a facility in Reykjavik, Iceland. This further enabled them to create a replica of the entire Silk Road system, allowing them to see everything, and Dread Pirate Roberts’ every move. They read through his chat logs, followed the main bitcoin server showing all vendor transactions, and even learned how he had ordered several assassinations of people who had tried to blackmail him. Eventually, an IRS investigator was able to connect Dread Pirate Roberts to Ulbricht through an old post on an open forum where Ulbricht had asked a question about the encryption tool, Tor. Through that question, Ulbricht’s personal email was revealed, which showed his full name. So, what happened next was straight out of a movie. While Ulbricht was in a public library in San Francisco, agents from the US government distracted him by staging a fight. And when he turned away to look at them, other agents grabbed his laptop and were able to secure the information connecting Dread Pirate Roberts to his account. On the computer they secured a mountain of evidence: a list of all the Silk Road servers; 144,000 bitcoins, which at the time were worth more than US$20 million; a spreadsheet showing Silk Road’s accounting; and diaries that detailed all of Ulbricht’s hopes, fears and aspirations. As a result of all this, Silk Road was shut down, and Ulbricht, the pioneer who opened the door for drug sales to flourish in cyberspace, was subsequently sentenced to a double life sentence in prison. 
In court, the judge echoed that what Ulbricht did was unprecedented, and that in breaking that ground as the pioneer, he had to pay the consequences. Anyone who might consider doing something similar needed to understand clearly that there would be serious consequences. And since then, similar marketplaces have been launched all over the dark web. Some have outright stolen their users’ bitcoins; others have been successfully shut down by law enforcement; but still others operate in some corner of the dark web, although none at the sheer magnitude of Silk Road. Additional Readings 2.3.4 Case Study – Silk Road: Subjectivity of Ethics
Now I love this case, because it’s straight out of a movie, right? This 28-year-old guy, who’s seemingly very, very normal. Yeah. Neighbours didn’t have any idea what was going on, yet he was leading, in many ways, what was considered one of the largest marketplaces for illegal behaviour that the world had ever seen, whose total commerce was worth billions of dollars. So what do we learn from this? Well, that’s an interesting question. People get arrested all the time, right, for illegal behaviour, selling drugs, all these kinds of things very similar to what Mr. Ulbricht has been charged with and convicted for – but what’s so special about this case, as it relates to ethics in general, and to FinTech ethics in particular? Hmm, well, a couple of things immediately come to mind. Because the nature of these types of crimes related to FinTech has become increasingly cyber, the way that law enforcement now has to police these crimes is also becoming increasingly cyber. So a lot of the tools that Mr. Ulbricht, and other people within that marketplace, utilised in the dark web – law enforcement actually did use those same tools, right? So when they go undercover, for example, they’re not literally going undercover where they’re changing their identity or the way that they look, but they’re creating usernames and other things, profiles, so that they can kind of infiltrate those market spaces, which, again, could be remotely from somewhere in Wisconsin, for example. Sure. And talking to them in San Francisco or wherever he was. And you know, personally, in some of the law enforcement work that I’ve done, it’s the same thing, right? So a lot of the investigative work that you’ll do is now sitting in front of a computer, trying to put together financial documents and transactions and things to kind of identify where the various actors are. 
The second thing that immediately stands out, from an ethics standpoint – and I find this really, really fascinating – is that his stated mission was actually moral in nature, right? So Mr. Ulbricht was essentially trying to create a marketplace – he’s libertarian, right? And believes that government and regulation are inherently evil in some ways. That is what he claims, and so he wanted to create a marketplace that was free of these types of restrictions. – Government interventions. Exactly, right? And the way he described it, the government should not have a monopoly on violence, for example, especially in terms of drug trafficking and whatnot. He believed, or at least he claimed to believe, that this type of online marketplace would actually be inherently more ethical and more moral than the violence that occurs every day with drug trafficking into, say, the United States, right? And I think this goes back to the subjectivity of ethics, and why it’s so difficult to have kind of a global, or even consistent, dialogue concerning what is actually ethical, and I think it’s gonna become increasingly hard in terms of the transnational and global nature of these types of, not only crimes, but just commerce. Yeah, so those are really interesting questions for me. Kind of piggybacking a little bit on some of the things that you’ve said, I thought this was really fascinating, because there have been large-scale drug sellers in the past, so that actual aspect of the crime itself is not necessarily too unique historically – but the fact that he was able to rely on cryptocurrencies, particularly Bitcoin, to facilitate the transactions. – Yeah. – In a completely digital space, which created the safety that you’re talking about, and raises questions of anonymity, and privacy, and the use cases of certain aspects of this technology, which I think is also a component worth considering in the context of our course. 
Yeah, it’s only a matter of time, I guess, before the next iteration of Narcos, or whatever it is gonna be – the Bitcoin, or Silk Road, version of this – where they’re gonna have to explain how this entire network of illegal behaviour has kind of gone crypto. Hmm. Additional Readings 2.3.5 Case Study – Silk Road: Cultural Lag
– Okay, so getting back to the conversation about Silk Road: are there certain aspects of how these technologies are being utilised that can help us understand what they’re gonna look like going forward? And let me just relate it back to one of the key principles we’ve been talking about: the idea that once these technologies are out, it’s very, very difficult to pull them back. And there’s often this slippery slope, kind of race-to-the-bottom, dynamic. So, if you look at it from a corporate regulation standpoint: companies were created, and then later on, because people were seeking privacy, they would go out to these island nations – the Cayman Islands, the BVI – and those jurisdictions would kind of outbid each other by trying to be more private and providing less information. And so, again, while that attracted a lot of legitimate business, it also kind of increased the opportunity– – To abuse the system. – Yeah, legal forms of abuse that have created problems globally now. So, are we seeing this – are there other examples of this that kind of predict what this is gonna look like in the next iteration? – So, I think we’ve already seen at least one iteration post Silk Road, and one of the things we mentioned is that one of the drivers allowing Silk Road to grow to the size that it was, was the use of Bitcoin as the medium of transaction. And we believe Bitcoin and these kinds of cryptocurrencies provide some level of anonymity: though the blockchain itself, the ledger itself, is exposed, and people can see the transactions that are happening, the actual users themselves have some level of anonymity, as opposed to using your credit card and being immediately identifiable. And maybe the next well-known version of this is Monero, which is another cryptocurrency – an altcoin, an alternative cryptocurrency – that has developed and grown quite rapidly over the last few years. 
And one of its key characteristics is that it’s even more anonymous than other cryptocurrencies. – That slippery slope. – Again, there’s a potential slippery slope there. And we see this – perhaps one example is North Korea, which reportedly is using Monero, maybe even potentially mining Monero, to circumvent the international financial system, because they’re subject to a variety of UN sanctions and restrictions barring them from accessing traditional financial markets at the moment. And one way they are perhaps circumventing that, or trying to get around those, is the use of these more secretive, less accessible forms of cryptocurrency, such as Monero. And there are a lot of reports that they’re using that as well. – Okay, so Bitcoin was utilised within Silk Road primarily because it was largely anonymous. But now we’re seeing people leaving Bitcoin to go to something like Monero because it’s even more anonymous. – Potentially more anonymous. – Potentially. And now we’re seeing governments getting in on the game. And these are governments that oftentimes are maybe– – Maybe less mainstream. – Yeah, less mainstream, oftentimes kind of tied to, say, terrorist financing or other globally sensitive political topics. I find it somewhat ironic, first of all, that you would have the growth of this next iteration flowing out of the same principle, anonymity – but it does make sense, especially because when you have this race to the bottom, or slippery slope, that’s the way it goes. It continues going down. But I also think it’s interesting, when you look at the moral underpinnings of why the founders of cryptocurrencies, and Bitcoin in particular, created those currencies in the first place: very much like Ulbricht and Silk Road, it was in and of itself kind of based on moral principles – the idea that you wanted to decentralise the marketplace. You wanted to democratise finance. 
And, in many ways, allow people to bypass governments and current forms of currency, right. And so it’s interesting that, very much like with Silk Road – and it’s not to say that all these uses are bad, certainly – what was initially perceived as at least partially a moral conviction is now in some ways being, I don’t wanna say misused, but being utilised in ways that perhaps weren’t initially anticipated. – Sure, and that’s really interesting, because I think if you talk to visionaries who have a really strong view about the role of cryptocurrencies, it goes right to your point: many of them imagine a world where fiat currency is actually replaced by cryptocurrency. Because fiat currency is tied to governments and central banks, and that mechanism, they feel, is increasingly archaic. – Inherently oppressive. – Could be, could be. And so, going to a more transparent system, a more distributed system of cryptocurrencies – that’s what they think the future will be. And you’re right, there’s a great deal of irony there, because now not only do you have governments trying to regulate it more, they’re also getting involved in the use and production of it as well, potentially. And there are these minor examples of governments who have come out and said, “Hey, we may want to try to issue our own kind of cryptocurrency.” And so, there’s a great deal of irony there. Additional Readings Explainer: ‘Privacy Coin’ Monero Offers Near Total Anonymity. (2019). New York Times. Retrieved from: https://www.nytimes.com/reuters/2019/05/15/technology/15reuters-crypto-currencies-altcoins-explainer.html Jardine, E. (2018). Privacy, Censorship, Data Breaches and Internet Freedom: The Drivers of Support and Opposition to Dark Web Technologies. New Media & Society, 20(8), 2824–2843. Piazza, F. (2017). 
Bitcoin in the Dark Web: A Shadow over Banking Secrecy and a Call for Global Response. Southern California Interdisciplinary Law Journal , 26(3), 521–546. 2.3.6 Case Study – Silk Road: Trust and Accountability
For me, the first question is, you know, we talk about this thing called the dark web, and it sounds evil. Is the dark web an evil thing? What do you think, Dave? Yes! No. I think it does show one of the key things that we’re gonna talk about as we get into the ethics of financial technology: the way we define these things, the way that we describe them, even how we name them, will color people’s perception of them. So our bias towards something can be projected, not only in the code that you create for, let’s say, AI going forward, but also in terms of, again, just the way we characterize these technologies. So clearly this term dark web was probably put forward by individuals who wanted this to be perceived as primarily a negative thing, perhaps from a policing or national security standpoint. But the reality is, as David made clear, these technologies were actually created by the US government for secure communications between various elements of the US military. And there are so many aspects of these technologies that are utilized every day in order to protect us and provide us with privacy. This is one of the major dichotomies that we have, not only in terms of FinTech, but broadly in terms of regulating privacy and information in general. And this is something that has been going on for quite a long time. Because when you talk about ethics, most people primarily focus on what is legal; they focus on the law. On the one side you have lawyers like us, who teach people about the hardline rules – black and white rules about what is acceptable and what is not. And those have been largely defined by society through the codes and laws that we have in place. On the flipside, you have the moral, more ambiguous, sometimes subjective aspect of ethics, which can be related to culture or history or even religion – so many aspects of culture that are built into what is perceived to be acceptable in society.
And, with governments, the pendulum of regulation swings back and forth, in terms of how much to regulate, and then how to back off that regulation. So, to use an example, I think if you were to go to someone and ask, do you utilize communication tools like WhatsApp, for example, and do you find those communication platforms valuable because they encrypt the communication? I think most people would say unequivocally, yes, of course. Right. If you were to say, I provide you this software, but someone from the NSA or someone from the police is going to be listening to all your communication and documenting that communication – I think most people would have an averse, visceral reaction to that. So, we want that for ourselves in terms of privacy and ownership of our own data, control of what the world knows about us. But then the flipside is, there are very valid concerns in terms of safety, in terms of national security. And so you’ll see scenarios, like the San Bernardino shooting, where big segments of the population – even though they for themselves would advocate for privacy and security – were simultaneously asking Apple, hey, you gotta jump on this, you’ve gotta crack this phone so that we can ensure these types of attacks don’t continue occurring. And I feel like this is where we are right now in terms of this dichotomy, this paradox, between privacy for ourselves and the broader social good. So can a single country manage that debate? Absolutely not, and this is the issue. Again, if you go back to regulation as an example, whether it’s trade, whether it’s financial regulations – even contracting. Simple things like contracting. There are challenges when initiating these types of transactions and legal relationships from a broad cross-border standpoint.
And especially with anything that’s related to technology, especially if it’s on the internet, you’ve got servers that are hosted in multiple countries, and you’ve got pretty much everything going through the US at some point right now. And you have some countries, like the US, that have a very broad mandate in terms of the extraterritoriality of their law enforcement, where they will go into another country and actually nab people – very much like executives of Chinese companies that have been nabbed, not even on US soil, related to what the US government views as its right to enforce regulation. And then you have other governments that are completely hands-off and don’t even have regulation on these points. Now, another example would be here in Hong Kong – a very, very small place, but it is a finance centre and a FinTech centre. A lot of the data that is here is actually hosted outside of Hong Kong. So, when you set up a bank account, or when you click on, you know, iTunes, and you agree to your data being collected, what you may not realize is that a lot of the time that data is actually stored elsewhere, so you have multiple privacy and data ordinances and regulations that are going to apply just to that one subset of data.
Additional Readings
2.3.7 Case Study – Silk Road: Privacy
So that’s a great explanation – I think a great way for us to start thinking through the topic. I guess it fundamentally gets to a core question: do people that use these technologies – be it the dark web, the deep web, or just normal everyday applications that everybody uses – have a fundamental right to privacy in their use? Or, by virtue of saying, hey, I want to use this application, are we basically saying, I am giving up some level, some measure of privacy? And is that what pushes people into using things that are below the surface internet, be it the deep web or the dark web? This question of the right to privacy is not a new one – it’s a several-centuries-old question that goes back to deeply held moral and legal beliefs about the right to privacy. So a lot of major legal questions, including abortion and other things around the world, actually get back to the same question of the right to privacy. What right do I have to engage in an activity within my own home as long as I’m not harming other people? And this is just an extension of that, where this data is being projected publicly, and it’s a really complicated issue. Because, on the one hand, when you say the right, well, first, there has to be a granting of that right. There either needs to be a legal principle, for example within a constitution or within the law, that says you have the right to this particular thing – in this case, the ownership or control of your own data. Then you may even have a higher moral right – so kind of an Aristotelian or even religious right to privacy. Say, I’m an individual and therefore I have the right to control who I am, my own image, my own likeness, the way I’m projected to society.
But then, beyond that, you have those kind of daily, ticky-tack opportunities that are contractual in nature, where we often give away these rights – and we agree to, not a violation of privacy, but certainly a limitation of our privacy and our own data. At least eroding our privacy. Yeah, exactly. And so a great example of this is, just recently in one of my classes, I had a number of students sit down and read through the terms and conditions that they had to click to accept to use a particular, very well-known app on the phone. You can say it. Well, I don’t want to put them on the spot. An app, you know, that utilizes photographs. They all use it. Yeah, exactly. Almost all of them were using the application, but none of them had ever read the terms and conditions – and as we went through it clause by clause there were many things that surprised them. Particularly about the ownership – not necessarily the ownership, but the use – of their data, and I think this will become a broader issue, particularly when it comes to financial data as well. Yeah, and we haven’t really gotten into AI and facial recognition software yet, but just imagine: we have potentially thousands and thousands of images of our face, of our facial expressions, that are out there now – that we have provided to a public, well, it’s not supposed to be public, but essentially to apps and other websites that we are giving the right to publish these things, often very publicly. And when you get into things like deepfakes, with video technology that can now take images from an app, say like Instagram or Facebook, and actually alter them in a way that creates videos that are very life-like, that are very realistic – this is where I think, several years into the future, people are really going to question why they were so willing to put images of themselves on the Internet.
There is one interesting side-note – again, not to bring this back to parenting, but I have talked to a lot of parents about the way they utilize, or allow their children to utilize, smartphones within their personal lives. Right. This is something that I think we are all still kinda wrestling with, because we don’t understand the implications of this. So one of the things that my wife and I decided to do is to never, or at least not for an extended period of time, put photos of our children online. And the primary reason goes back to this concept, this fundamental right to privacy. Who has the right to decide to put your image publicly on the Internet? And so, the example that I have provided in the past is: imagine if you’re going to a job interview at 21 years old, your first interview, and your potential employer has access to 10,000 images of you from the time that you were born to the time that you graduated – and you never consented to that, you were never asked whether or not that was good or allowable, but it was put on there. And again, this is not prescribing a moral solution to other people, but this is an example of how we as a society now have to go back. Now that the technology is out there, we have to, from a culture-lag perspective, go back and re-define how we are willing and content to engage with and utilize that technology.
Additional Readings
2.4.1 Case Study – Blockchain and Foreign Remittances
For our next case, let us tell you a little bit about Hong Kong, where both of us have lived for approximately 10 years. As many of you know, Hong Kong is dynamic, global, and one of the most interesting cities in the world. A part of Hong Kong’s story that most casual observers are not aware of is that embedded within Hong Kong’s cosmopolitan make-up are hundreds of thousands of women who provide childcare, home care, and other household duties for many of Hong Kong’s families. These women are designated as Foreign Domestic Workers, but usually referred to as “helpers”, “a-yi”, or “Aunty”. There are approximately 400,000 of these women working in Hong Kong, most hailing from the Philippines or Indonesia. These women are generally paid around US$570 a month, or roughly US$7,000 a year, most of which is remitted back to their home countries to support their families. The reality is that most of these women work really hard for a salary that you and I may not consider that high, but that salary is almost double the GDP per capita in the Philippines. And in the aggregate, these remittances by overseas workers, according to World Bank data, account for approximately 10% of the Philippines’ GDP. So individually and at a national level, the money really adds up, and the impact of these wages is a very big deal. Now how does this money in Hong Kong make its way to a family living in a village somewhere in the Philippines? Well, we are really fortunate that David Bishop, besides being one of our course instructors, is one of the world’s foremost experts on issues related to domestic helpers and how to protect them from exploitation. So, let’s hear from him about the issues these women face when sending money back home. So, for you out there, you might think that if you were going to send money, maybe you have a bank account and you would just do a bank transfer. Simple, problem solved.
Unfortunately, for the tens of millions of migrant workers around the world, this is usually not possible, since they are generally unbanked on both sides. Meaning, many of the foreign domestic workers in Hong Kong don’t have a bank account here in Hong Kong, and their families on the other side – the people they are sending money to – typically don’t have a bank account either. So the workers receive their wages in cash, and they have to figure out how to get that cash from Hong Kong to their family in a remote village, perhaps somewhere in the Philippines. To fill such needs, money remittance companies have sprung up all over the world, the most famous probably being Western Union. And for decades this is how people transferred money. As part of this process, there are two important things to note, which might not be apparent. First, there is a physical component when remitting money. A worker has to physically go to one of these locations to actually hand over cash. Then on the other side, there is another physical location where the receiver has to go to pick up the money. So both sending and receiving are very time- and labour-intensive processes, due to standing in lines, walking long distances, and perhaps waiting for and using public transportation, which comparatively might not be cheap. In addition, many of these workers only have one day off a week, usually Sunday, so much of that day could be wasted trying to send money home. Second is an issue of financial literacy. These money remittance companies charge fees that you and I may consider excessive, sometimes as high as 8 or 9% per transfer. Additionally, currency conversion fees are typically not competitive. So, even if a remittance company has a low rate for sending money, they will likely make money on the currency conversion, like when converting from, say, HKD to Philippine pesos. On top of all that, sometimes remittances can take time, at least a few days if not longer.
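The fees described above compound in a way that is easy to miss, so here is a simple numerical sketch. The figures are illustrative assumptions only – an 8% headline fee and a 3% currency-conversion margin, roughly in line with the ranges mentioned here – since actual remittance pricing varies by provider and corridor.

```python
# Illustrative remittance-cost arithmetic. All rates below are assumptions
# for demonstration, not quotes from any real provider.

def remittance_received(amount_hkd, transfer_fee_rate, mid_market_rate, fx_margin):
    """Estimate the pesos a family receives after both kinds of fees.

    amount_hkd        -- cash handed over in Hong Kong dollars
    transfer_fee_rate -- provider's headline fee (e.g. 0.08 for 8%)
    mid_market_rate   -- 'fair' HKD->PHP exchange rate (assumed 7.0 here)
    fx_margin         -- provider's markup on the conversion (e.g. 0.03 for 3%)
    """
    after_fee = amount_hkd * (1 - transfer_fee_rate)       # fee taken off the top
    effective_rate = mid_market_rate * (1 - fx_margin)     # worse-than-fair FX rate
    return after_fee * effective_rate

# A worker sending HKD 2,000 of a roughly HKD 4,500 monthly wage:
sent = 2000
received = remittance_received(sent, 0.08, 7.0, 0.03)
fair_value = sent * 7.0                                    # pesos at the fair rate
total_cost_pct = 100 * (1 - received / fair_value)
print(f"Received: {received:.0f} PHP; total cost ≈ {total_cost_pct:.1f}%")
```

Note how an "8% fee" plus a 3% conversion margin quietly adds up to nearly 11% of the transfer, which is the point about financial literacy: the headline rate understates the real cost.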
I’m not saying these companies shouldn’t make money for providing a service, but frequently their customers are not really that informed or have limited options. So a natural question is: what if that friction, lost time, or those unnecessary fees could be avoided or at least reduced? For many, the answer to that question, or at least an important component of it, is the use of blockchain technology. Today there are a number of remittance services that are trying to employ some level of blockchain to minimize many of the frictions that we have discussed, by promising to make remittances more efficient, secure and/or affordable. As these innovators pressure incumbents, there will be a shift, first a trickle but then a wave, as users become comfortable adopting new technology, bridging any cultural lag and learning to trust new advances in technology. Overall, the breakthrough in blockchain is really exciting and will be a game-changer for many of these workers, as well as the millions of people around the world who transfer money daily. And in Module 6, we’ll be looking into a really cool blockchain remittance service company, called BitSpark, that was formed right here in Hong Kong. So stay tuned for that.
Additional Readings
Module 2 Conclusion
We believe blockchain has the potential to be a revolutionary technology, like the Internet 20 years ago. It’s truly exciting to consider its possibilities. But like many such technologies, there are implications of its widespread use that are not always initially apparent and are difficult to address once the technology has become widespread. In such situations, it’s helpful to use frameworks to consider risks and implications. One such framework that we’ve found to be meaningful is “The Blockchain Ethical Design Framework”. This Framework was written by Cara LaPointe and Lara Fishbane and was published by Georgetown University’s Beeck Center, which focuses on social impact and innovation. We’ve included a link to the report below. The framework is focused around six guiding questions to ask when using blockchain as a solution: How is governance created and maintained? How is identity defined and established? How are inputs verified and transactions authenticated? How is access defined, granted, and executed? How is ownership of data defined, granted, and executed? And finally, how is security set up and ensured? These are important questions, and they can assist each of us to think deeply about the impact of blockchain technologies – either as a user or if we intend to deploy blockchain as a solution to solve a problem. More broadly, this course is ultimately about asking questions and having the desire and courage to do so. We will return to many of the themes that have emerged as we have explored blockchain, and will continue to ask such questions as we consider other technologies and their applications in later modules. Ok look, we think blockchain is a really, really exciting, really game-changing technology. And so from our perspective, we want you to think about blockchain in the same way that we thought about the Internet 20 years ago.
In the mid-90s, there was the advent of a new age of information, this thing called the Internet that everyone was excited about. And as a result, it gave us access to more knowledge and made knowledge more available to people than at any point in human history. Over the last few years, in the wake of fake news, echo chambers, etc., as the cultural lag is catching up, we have realized that this technology has also come at a cost. In the next module, we’ll be looking at compliance, and regulation and rules. But the thing about laws is that they are typically retrospective; they look backwards. Therefore, it is important for us, collectively, individually, but especially as a society, to proactively look ahead and think about what it is that we are willing to pay, what it is that we are willing to give up, in order to have these technologies in our lives.
Additional Readings
Module 2 Roundup
Hey everybody. We’re back with our roundup for week two. We really appreciate all the active participation this week in the discussion board. There were several really engaging discussions and questions that came through in the forums, which is always really great to see. – Yeah, honestly, one of our hopes as we designed the course was that it would not be unilateral, meaning us just giving content to you all, but multilateral, in the sense that you’re participating and engaging and pushing us as well, which really seems to be happening. Seeing this type of activity is really rewarding and gratifying for us, because we think this is how some of the best learning occurs. – Now we hope to continue that in the next module. As all of you are aware, in module one, we provided a broad framework by which to analyse some of the core technologies at the heart of FinTech, and in module two, the first technology we really considered is blockchain. From the comments in the discussion forum, it seems many of you have been thinking about similar questions. We wanted to spend the next few minutes following up on some great questions and contributions that were made. We want to revisit some of the discussion questions that we thought were really interesting and compelling in a few different ways. The first one that comes to mind started off with a comment that RichardStampfle made about the idea of free markets. That generated a lot of back and forth between a number of course participants, including Jstout84 and Peter-NYC, and a number of other student participants, which we thought was a really great interaction and the type of multilateral discussion that we hope to see generated in the forums. Dave, what are your thoughts on this broader topic of free markets, and how it operates in the context of some of the principles that we’re discussing in the course?
– Okay, so we’re gonna solve one of the great questions of capitalism and economics here in the next five minutes. – Five minutes. Sure, this is how we operate. – Sure, no problem. No, I mean the underlying question – and first, maybe just as a side note, some of the comments are really, really well thought out, and so we appreciate the high level of engagement. It’s clear that you guys are pretty damn smart, so we really appreciate the engagement. It’s been really good, making us think a lot. Free markets versus government regulation. This is the age-old question from an economic standpoint, and I think when talking about FinTech innovations, and especially blockchain, this is the intersection where people really start to have significant disagreement. Certainly, from a government regulatory standpoint, that is the reason why most governments are wary of this type of thing, especially with cryptocurrency: who is able to control it? One of the commenters – it was Jstout84 – I’m just gonna read this, because he had a really good quote from Thomas Hobbes. He said, quote, “People live nasty brutish lives. Always seeking to undermine each other.” Close quote. And he asked: is it too idealistic to think that completely free markets can function for the benefit of all? And I think, again, this is the question for this course. People call it the fourth industrial revolution. From my standpoint it’s really just capitalism 2.0 or 3.0. Can we as a global society come together in such a way that these massive, currently disparate resources can be shared so as to, maybe not achieve complete equality, but make it so people don’t feel so disengaged, so separate. – Marginalised. – Which potentially, obviously, can lead towards violence and other types of challenges. I think that is the great question that the original commenter was asking, and in module five, we’re gonna take this question a little bit further and really look into that.
A lot of the FinTech innovations were started based on these broad questions of decentralisation of power, or democratisation of finance, and really about eroding these power structures that have existed for centuries, sometimes millennia. And the question that we have for you in module five is: will governments, will large institutional holders of power like banks, actually allow that to happen? I think, again, that is the underlying fundamental question. To answer the specific question you asked, I don’t think it’s gonna happen any time soon, but it is really exciting. The way I would flip it around is: if FinTech works the way that some people think it will, will they have any power to stop it? Or is it just an eventuality, where cryptocurrency and various forms of decentralised systems will make it so that the very concept of government or finance just becomes eroded and transforms into something new? I think that’s really interesting. Not anytime soon, but it’s a very interesting hypothesis. – I think, maybe to piggyback on some of those points, ultimately it just comes down to a tension between where we think regulation plays a role versus this invisible-hand idea that Adam Smith talked about. The way we think about it is that they exist on a spectrum, and that spectrum can change depending on which industry we’re talking about, which country we’re talking about, and which microcosm of the economy we may be talking about at a particular time. But with some new technologies that come into play – maybe some things we talk about in module five, or things like smart contracts, or other things that we end up touching on – how do they help facilitate less friction in transactions? Which is ultimately at the heart, I think, of at least one school of thought on why we bring regulation into financial transactions and the economy.
We want to remove transaction costs as much as possible, but at the same time have a level of fairness, and protection of certain players in the game as well, and so this is an interesting tension. Unfortunately, I don’t think we answered it in five minutes, sorry. – No, we probably didn’t. I mean, again, you guys can answer these questions in the forum perhaps better than we can. It’s clear that you guys are really, really thoughtful on these types of questions, but I do think it brings up another thing that, within the forums and within the questions you’re asking, does seem to come up over and over again. There’s this interesting dichotomy – or, in Chinese, they say, (foreign language speaking) – where these two opposing forces, which a lot of you have identified in this FinTech space, pull against each other. The idea is that, on the one hand, we don’t trust traditional financial institutions, maybe even governments, so we hope that financial technology innovations push us towards a more democratised, decentralised system. And yet when we asked you who you trust, it was. – Traditional financial institutions. – Traditional financial institutions, right. And the underlying reason why is because there wasn’t a track record and, more importantly, there wasn’t a regulatory structure that provided that kind of social safety net – the insurance, the other types of, basically that framework that ensures that if you invest in some cryptocurrency or other system, your money will be safe. I think this is such an interesting dichotomy that we’re running up against as a society, and it leads into, for example, one of the things that we talked about this time: smart contracts. So we had people talking about how smart contracts could or would, or maybe even are, influencing their lives, and a lot of people touched on real estate transactions.
Related to real estate, they also touched on the actual storage of documents, like deeds and other land records, in government systems. What were some of the things that stood out to you in terms of the issues, or the ways that people saw smart contracts becoming more relevant in our lives? – In terms of the relevance of smart contracts? – Yeah. – Yeah, so I think smart contracts, it’s almost like a buzzword. It sounds great. With a lot of legal structures, or just law in general, we want to make sure that we get the right structure in place before we implement, and this is, I think, where some of the potential issues could come, because if we don’t think about this comprehensively, then it can create potentially more problems than it solves, or at least they kind of cancel each other out, and so that’s probably where the real concern is for me in particular. – Can I give a quick example of that? – Yeah. – One of the things that came up regularly in the comments was real estate transactions, especially the buying and selling of homes, and it was really clear that a lot of us are tired of paying all those middleman fees. Tired of paying for real estate agents. – Broker fees. – Yeah, broker fees. – Lawyer fees. – Lawyer fees. There are so many people in the middle grabbing pieces of that transaction. So I think very naturally, because that is the largest investment that most people make, that is also the largest type of transactional fee that we tend to pay, and it’s really easy to look back and say, I’m buying this house. I’m taking all the risk. Why in the hell am I paying this fee to a real estate agent who just unlocked the door for me? It does seem very natural. Now, if you don’t remember from my background, my original legal background was in commercial real estate, so let me give you the flip side of that coin, because I think this goes exactly to your point.
The reality is that a real estate agent – and agent is a legal term, right? It means a fiduciary relationship, which we talked about already: the idea that it is someone who is put in a position of trust, and therefore has a higher standard of trust and legal requirements, because they are meant to be there to help you and guide you. – Normally when we talk about an agent, we talk about principal and agent. – Exactly. – Somebody delegating trust, authority, power to the agent, right? – Yep, exactly. – In this case the property owner. – Yeah, the property owner, or the purchaser. You have to use an agent, legally, in many cases, in order to go through that process, and again it seems so unnecessary – why would we pay that person for doing so little? The one thing that I want you to look at going forward is: you’re not necessarily paying them that fee for what they are doing, you’re paying them that fee to ensure they do something if it goes wrong. And this is the problem. This is why we have that same dichotomy with the traditional financial system versus a blockchain-based cryptocurrency, or whatever FinTech system: you are paying the lawyers. You’re paying the title insurance company. You’re paying the brokers. You’re paying all of those people along the way to protect you if something goes wrong. 99.999-whatever percent of the time, nothing goes wrong, and so it seems like that was a waste. – A wasted cost. – Yeah, exactly, a wasted transaction. You think, oh my gosh, they got $1,000 for doing nothing. I can tell you, as a former corporate lawyer in the commercial real estate space, it is absolutely money well spent most of the time, and I’ll give you a quick personal example. My wife and I purchased a home at foreclosure before we moved to Hong Kong about 12 years ago.
We recently sold that home, in 2017, and when we sold it, we realised – because the lawyers found it – that the legal description of the land had actually been recorded incorrectly, and because of that they had to go and find the original owner and get them to sign a new deed, a corrected deed as it’s called, to have the proper. – Proper description. – Description on there. And if they didn’t have that, then the buyer would not have been able to get all of their parties to line up – the mortgage company, the title insurance company, et cetera – and we would have been stuck with a house that we would not be able to sell, right? So on the one hand it means that the original lawyers didn’t really do their jobs, but on the other hand it meant that because we paid these fees to these people, they were there to protect us. And I think, again, this gets at the heart of it. I’m not saying that they’re worth all of their money. I’m not saying I don’t also feel that sense of anger when I have to pay someone a fee that I don’t really think they deserve – and as a lawyer I’ve been on the other end of that, and have probably received money that maybe I didn’t deserve in the traditional sense. But the reality is, the system is there specifically to deal with the dichotomy that we’re now facing. We want that protection and we need it, but now technology could give us extreme efficiency – and with that efficiency comes less certainty and less protection. So the question is, how do we develop the efficiency but also maintain the protection? I think that’s really hard. If it’s self-executing, that’s really hard. – That’s very difficult, and so I think at the heart of a lot of the technology we talked about – be it blockchain or AI-powered mortgage lending decisions or anything along those lines – that ability to have recourse is really at the heart of a lot of these things, when we talk about free markets, regulation, the role of the law.
How does that play? Ultimately, at the end of the day, if something does go wrong, you want to be able to have some level of recourse. – Absolutely. – Ideally being able to talk to somebody. – Absolutely. – I think a large part of. – And the bigger the dollar value is, the more you’re gonna want that. – Want that, right? I think a large part of the issues that we face when it comes to new technologies is that up till now nobody has really articulated a solution to what that recourse would be specifically, so if a particular block is incorrect, or if whatever AI decision making that’s going on in a particular company comes out, for whatever reason, with the wrong decision, what is the recourse of the person who is impacted by that? – Yeah. – I think again, that cost, both financial and in time and emotion, is pretty steep for the average person. – Very significant, yeah. Going on to some of the other comments that you have in regards to blockchain, and the way that it could potentially impact our lives, there were some things that stood out to me that I hadn’t really thought about, especially in terms of my day to day life. Some commenters talked about traceability of products. So product liability is a very serious thing. You want to know that the food that you’re eating is safe or that the diamond that you’re purchasing is accurate, or whatever, like it’s described accurately, and there are some really interesting descriptions about that. Here in Hong Kong someone mentioned milk powder, which is sometimes difficult to trace, and there have been concerns over milk powder. – Baby milk powder. – Baby milk powder. There were some concerns several years ago about the source of milk powder and what it contains, and maybe there could be some traceability on that. Maybe in terms of fair trade, things of that nature.
One thing that wasn’t mentioned, that I thought would come up, because this has come up globally in certain contexts, is actually voting, and voting not in terms of cryptocurrency, which we did talk about, but actually voting in terms of governments. – Political elections. – Political elections, yeah. The immutability of the blockchain does mean that theoretically, if you wanted to increase the number of voters, then the best way to do that is to not make them physically go to a polling location, but let them do that on a mobile application somehow. That didn’t come up, but maybe I’ll throw that back at you. Is that something you think that governments would potentially allow at some point? – I think the other thing that’s really interesting when it comes to blockchain and application, and I think one or two commenters did talk about this, or alluded to it at least, is the idea of property records, or title. How do we track that? This is obviously super important for governments as well as homeowners, like you were talking about in your own personal experience. I think in a lot of places in the world, particularly where database records are not as comprehensive or as clear as people would hope they would be, one solution that people are hopefully pointing to is blockchain, and we see this in countries where that title record or the deed record is really spotty, in some senses, and if you were able to get that in place, then you would be able to clarify a lot of potential issues, and unlock a lot of value for people that own this property to be able to utilise it in different ways. The real impediment, though, or at least one of the key impediments to actually doing that, is putting the right records in to begin with. – It has to be proper in the first place, otherwise you’re just gonna have immutable bad data. – Then going back to try to fix that, it goes back to what we were talking about.
– I don’t think many of you perhaps realise how inaccurate a lot of real estate records really are, or maybe you do recognise it, and that’s why you’re suggesting this, but taking Hong Kong as an example, it’s one of the most modern economies of course, and as a Commonwealth jurisdiction with a legal system that stems from the U.K. system, if you studied the common law, or if you worked in a common law country as a lawyer, then a lot of your legal training can get transferred over. The vast majority of it gets transferred over, but here’s the thing. Every single lawyer, even if you went through the Commonwealth system, has to take the conveyancing course. – What does conveyancing mean? – Conveyancing is when you transfer ownership of real estate to another person. It’s so messed up here that everyone has to take it. – They have their own course that they have to take. – Yeah, everybody, no matter how much experience you have. Even as a former commercial lawyer from a common law country, with the same legal background, I was told that if I wanted to get my qualifications transferred over, I still had to take this conveyancing course, because it’s so different than everywhere else, and the rumour is, I don’t know if this is true, but the rumour is that if you look at the deeds and go back far enough, for any property in Hong Kong you could say there’s a conflict in terms of ownership, and again, I’m not saying that’s true. The point is simply to show that. – Even in sophisticated markets. – Exactly, where they’ve been keeping records a long time. – If you think about markets that are for whatever reason less sophisticated, then there will be a litany of greater issues to address. – It’s almost like the less developed it is, the more likely it could work, because then you almost have a clean slate. – This is a real difficulty in a lot of places in the world when it comes to real assets.
Particularly property, and how those are gonna be conveyed or sold or used, and to get clarity on this would actually help a lot of these countries and their economies. – Oh yeah, a tonne, and so going back, we mentioned voting, although from the political standpoint, but voting did come up in this module, so we want to talk about that briefly. Some of the more interesting conversations, I thought, where again it was very clear that many of you understand this as well or better than I do, especially on the technology side, were about the blockchain, and from a governance standpoint, the voting mechanism by which a blockchain is controlled. We mentioned that the majority of blockchain and cryptocurrency is typically governed under a majority rule system, and we talked about, is that the best way to do it? What were some of the ideas or thoughts that came up for you? – I think the great analogies that usually come up when we talk about how blockchain is governed, or how these different industry groups are governed, because there are all these different kinds of communities of different blocks and chains and applications that are out there. The initial analogy is always corporate law and the principle of majority rule. Besides probably your elementary school teacher using that to say, what are we gonna do next? Are we gonna go to recess? Are we gonna eat our snacks? I’ll let you vote. Beyond that. – That wasn’t my elementary. She was a dictator. – But beyond that, corporations as a general rule use one share, one vote, which we mentioned in our course, and this is something that is somehow ingrained in the political side of how a lot of nations govern, but also on the corporation side. Now obviously there are exceptions to that, and some of the commenters had some great points related to that with respect to, what would be the alternatives? – Yeah. – So we think of super majority voting in the case of special resolutions.
These usually require a super majority, which could be 75%, and then, is that the criteria we want to use if we want to fundamentally change things? Or other people talk about cumulative voting, where people can load up votes on a particular item, so to speak, and concentrate their votes, which would help smaller, minority-type voters, and these are really interesting discussions to have when we think about how we want to govern these things, because we know, at least in certain types of blockchain communities so to speak, that there’s some concentration of power, by concentration of either the tokens or whatever they’re using as your vote to say how many votes you’re gonna have. – This kind of came up. In case you’re not going there, several people mentioned a, quote, 51% attack. What is that? – Yeah, so a 51% attack is a little bit different. It all falls under the umbrella of governance, but it’s a little bit different from pure voting. If we use bitcoin as an example, there are a number of computers out there. Large, oftentimes very focused, special computers out there that are doing mining, calculating some sort of mathematical problem, and when that’s solved, it pops out a coin for you basically, right? The idea of a 51% attack, what that relates to is somebody owning over 50% of the network power in that particular chain. Once they get to 51%, they could actually change the records in the ledger. In the blockchain ledger, they could change the record. Now until you hit that threshold, you normally can’t. This is what they were talking about with a 51% attack. Is that a risk basically? Certainly, in certain communities that is going to be a risk. Particularly ones that are not distributed, and there’s that side of it.
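The threshold ideas in this discussion can be sketched in a few lines of code. This is only an illustrative sketch, not anything from the course: the pool names and share figures below are made-up examples of how a simple-majority rule, a 75% supermajority rule, and the over-50% condition behind a 51% attack can each be expressed as a check against a share of voting or mining power.

```python
# Illustrative sketch (assumed names and numbers) of voting thresholds
# and concentration of mining/token power.

def passes(yes_share, threshold):
    """True if the 'yes' share of voting power exceeds the threshold."""
    return yes_share > threshold

def majority_holder(shares):
    """Return the holder controlling over 50% of power, if any -
    the condition that makes a so-called 51% attack possible."""
    for holder, share in shares.items():
        if share > 0.5:
            return holder
    return None

# Hypothetical distribution of mining power in a small network.
shares = {"pool_a": 0.55, "pool_b": 0.30, "others": 0.15}

print(passes(0.55, 0.5))        # simple majority: True
print(passes(0.55, 0.75))       # 75% supermajority: False
print(majority_holder(shares))  # pool_a - over 50%, could rewrite records
```

The same `passes` check covers both corporate-style one share, one vote tallies and coin-weighted blockchain votes; only what counts as a "share" changes.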
Oftentimes on the voting side, what will happen is, miners for example will solve mining problems, and as a reward for mining the problem, they get a cryptocurrency of some sort. Sometimes certain communities will use that as the voting metric. If you’ve popped out 100 coins, then you have 100 votes maybe, to decide how things are going to work out, if we’re going to go this way or that way on a particular problem. I think those are where those two things are linked, but I think in terms of the voting, if we go back to, I’ve got X amount of coins or whatever, then should I be able to vote one vote per whatever I have, or should there be some other type of system in place, and I think this is where the comments and the debates were when it came to that. – Yeah, and this actually relates, though it may not seem like it, to when we threw in the section about the environment and talked about electricity. We probably didn’t do a very good job of making it clear how related these two parts are. Let’s just talk about that briefly. We talked about the electricity component to talk about how the utilisation of new technologies can have broader implications, often negative, and so therefore we need to think about those implications, but here’s the connection to the governance part, which we didn’t really explain very clearly.
The idea is, if it requires a lot of electricity in order to mine these coins and to gain some level of control, it also means that the people that are going to have the most control and the most shares of whatever the crypto or blockchain is are also going to be the people that have the greatest access to the electrical system, and so what many of you may not realise is that a lot of the people that control big percentages of coins and other blockchain based systems are actually quasi governmentally connected, either through personal relationships or actual government support, where they can just use massive amounts of cheap electricity to run these massive mining farms, where they’ll mine these coins, and so that means if a government, let’s say, or someone connected to someone in power, wanted to gain control over those things and they had that massive ability, then they could theoretically gain some level of control, maybe even majority control, and change the rules within that system, and this is, again, an interesting paradox, where it’s not like a one coin, one vote system where all of those coins are distributed equally around the world. It’s actually that those that have literal access to power, in this case electricity, are oftentimes the ones that then get to write the rules for those things. – Yeah, so there’s definitely a connection between those things, and I think if you’re the average cryptocurrency enthusiast, where you’re buying fractions of bitcoin or ethereum or whatever, these are things you generally don’t think about, but I think clearly in a macro sense there are definitely sociopolitical and socioeconomic links that are driving or providing the structure to the market that we play in. – Exactly. – In the context of blockchain, and some of the other technologies and themes that we end up talking about through the course, we often talk about remittances. Particularly overseas remittances.
I think you had an interesting experience with that recently. – I did, so this is one of the areas, again, probably because of my interest in migrant workers and trafficking and other things in that area, that I personally am really excited about from a blockchain perspective: the idea of someone being able to transfer money, peer to peer, very quickly. Perhaps using a mobile device, instantaneously, with very very low fees. I’m super excited about that. Not possible for everybody yet, but I think within the next three or four years, certainly five or 10, it will be available to pretty much everybody, assuming countries allow their currencies to be converted and what not. But so this week actually, just by happenstance, I was in a position where I was transferring money to a friend of mine in the Philippines, and so I had to physically go into the downtown portion of Hong Kong. I had to go to this place called Worldwide House. I had to write out something on paper, in this really old form, which by the way they messed up anyway, and they actually took my name down wrong on the transaction form, even though they had my ID and everything, and it was a cumbersome, expensive process, relatively speaking. Now the money made it there, and so I think if you think over the past several decades, these remittances have allowed migrant workers to really spread out across the globe, and provide for their families and friends, really from everywhere. It’s amazing in that regard, so don’t get me wrong, but I can see what’s coming next, and I can think about, as you said in the module, David, even if you can just decrease those fees by 1% or 2%. – Big impact. – You’re talking about billions of dollars going to the developing nations of the world. I think it’s really really exciting. What I did, I never do this, but I actually took my phone and filmed a little vlog or selfie, whatever it’s called. Whatever the kids are calling it these days, and made a little video of myself.
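The point about fee reductions adding up to billions can be roughed out with back-of-the-envelope arithmetic. The annual flow figure below is an illustrative assumption, not a number from the course:

```python
# Back-of-the-envelope sketch: aggregate effect of a small cut in
# remittance fees. The global flow figure is an assumed round number.

annual_remittances_usd = 500e9  # assumed global remittance flow per year

def annual_savings(flow, fee_cut):
    """Money left in senders' and recipients' pockets if average fees
    drop by fee_cut (e.g. 0.01 for one percentage point)."""
    return flow * fee_cut

print(annual_savings(annual_remittances_usd, 0.01))  # ~$5 billion per year
print(annual_savings(annual_remittances_usd, 0.02))  # ~$10 billion per year
```

Even a one-point cut on a flow of that scale frees up billions of dollars a year, which is the scale of impact being described.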
We’re gonna put that together, and send that out to you as well, so you can see what we’re talking about when we talk about the remittance process. – Overall the experience was. – I’ve done it enough times that it was to be expected. It was neither good nor bad. The money made it there. I’m grateful for the process, but I’m really excited for the day when I can just do it on my phone. – Well that’s it for this round. Thanks again for your participation and contributions. We’ve thoroughly enjoyed reading the comments and discussing the ideas amongst ourselves as well, which we frequently do after reading them. We’ll ping each other or talk to each other when we see each other. – A lot of questions. I have no idea. What do I say? – These are really great insights that you all are sharing, which we appreciate. Now moving on in our next module, we’re gonna explore cybersecurity and crime, which will build on what we’ve already covered in modules one and two, so we look forward to seeing all of you again in our next roundup after module three.
Module 3 Cybersecurity and Crime
3.0 Module 3 Introduction
– Welcome back. In this module we’re going to explore a really interesting part of FinTech that frequently ends up in news reports: cybersecurity and digital crimes. The ubiquity of technology and our reliance on it in daily life makes cybersecurity a really important and fascinating topic. – Now, I’m sure you’ve seen reports of hacks and the personal information of millions being exposed. Or perhaps you’ve even been a victim of cybertheft or other digital crime yourself. Now as devices, accounts and other aspects of our everyday life become more interconnected, the convenience that we gain is also balanced by the necessity for cybersecurity. Now, for many institutions, cybersecurity is somewhat like the story of Sisyphus in Greek mythology. Now, if you’re familiar with Sisyphus, he was sentenced to roll a large rock up a hill, and it would roll back down after it got to the top at the end of the day. And this forced him to start over again, day after day after day. Now, similarly institutions are under near constant attack by cyber attackers, with new threats always appearing. So who is responsible for thwarting these threats and protecting user data? – And for all the benefits we believe FinTech’s rise will create, FinTech’s potential for good is also tempered by the potential for it to be used for illicit purposes. Given that, it’s really important for us to consider these risks through the principles of trust, accountability, proximity, privacy and cultural lag that have served as touchstones throughout the course. So in this module we want to explore topics of cybersecurity and digital crimes and their importance in considering FinTech through some movie-like, but actually true, stories. So to get us started, we’re gonna look at a billion dollar bank heist.
3.1.1 Case Study – Billion Dollar Bank Heist
In February 2016, at Bangladesh Central Bank’s headquarters in Dhaka, something occurred that laid bare a profound weakness in the global financial system. When banks move money around the world, they use a system called SWIFT – Society for Worldwide Interbank Financial Telecommunication – which is a consortium that operates a trusted and closed computer network for communication and payment orders between banks. Today, SWIFT is used by over 11,000 financial institutions in more than 200 countries and territories around the world. And one of them is Bangladesh Central Bank – BCB – with its headquarters in Dhaka, Bangladesh. On a daily basis, staff members at BCB would go into a highly secured room with closed-circuit security cameras, log into SWIFT and dispatch payment orders with encrypted communication. 8,000 miles away, the New York Federal Reserve Bank is the gatekeeper of much of world banking, and hosts accounts for 250 central banks and governments – including the BCB. When the New York Fed receives a payment order, it follows the instructions and sends the money to the recipient. At the same time, it sends a confirmation back to the sender, which in this case is BCB, marking the transaction completed. This process happens all around the world, every single day, with about $5 trillion being directed via SWIFT. And the system is designed to be unbreachable. On Thursday, February 4th, 2016, 35 payment orders using the credentials of BCB’s employees were sent via SWIFT to the New York Fed. 5 of them went through, but the other 30 requests were blocked, as the Fed system had detected a sensitive word in the recipient’s address and therefore flagged those transactions as suspicious. The next day, a total amount of $101 million was successfully transferred from BCB’s account to several accounts in Sri Lanka and the Philippines. But in the SWIFT operation room in Dhaka, it was quieter than usual.
The printer was malfunctioning, so none of the confirmation letters got printed. They didn’t think much of it, assuming it was a small mistake, and were going to fix it the next day. After spending hours on Saturday getting the printers to work, the 35 payment requests caught the BCB employees by surprise, and the SWIFT communication system was still not working. Assuming they were mistakes, the BCB employees tried to contact the New York Fed via email, phone and fax to cancel the transactions, but the Fed was shut down for the weekend. On the following Monday, BCB was able to get the SWIFT communications system working again. And it was not until then that they realized that the most daring bank robbery ever attempted using SWIFT had happened, four days earlier. It would prove to be the most severe breach yet of a system designed to be unbreachable. It turned out that the hackers had installed malware on BCB’s servers that had sent the 35 payment instructions and which deleted any incoming SWIFT confirmation messages. And, when the Fed was back in business that Monday, BCB was able to reach out and ask them to block the money transfer – but it was too late, and the money had already been sent to the recipient banks. So they sent SWIFT messages to the Philippine bank, RCBC, but it was a public holiday in the Philippines, so they would not be read until Tuesday, February 9th. And by that time, the money had already been transferred out. Some funds were transferred to Sri Lanka, and those funds were later recovered because of a misspelling of a word in the instructions, which triggered an alert at the local bank. But the $81 million that went to the Philippines was not recovered. That money was sent to four fake accounts at a small Manila branch of a bank called RCBC. And from these accounts, the money was taken out and laundered through Philippine casinos – never to be recovered.
Now at the time, Philippine casinos were not covered by anti-money laundering laws, and so it was nearly impossible to track the money. As of today, most of the money is still nowhere to be found. Other similar cyber crimes have been reported elsewhere, such as in Vietnam and Ecuador, and other cases will likely come to light. The hackers, however, have yet to be identified.
Additional Readings
3.1.2 Case Study – Billion Dollar Bank Heist: Proximity
So if I’m thinking like an old school movie where you’ve got a cowboy and he goes in and he’s like robbing a bank, you know, he can only take away what he can physically carry. In fact, that’s kind of the component in a lot of these movies with bank heists and stuff, is that the physical weight of the money is actually a challenge, and they have to kind of balance the risk of getting caught, and getting away… – Speed. – Speed, all those things, right. So it’s really like an entire action-packed scenario. But what you’re saying is, in this type of a scenario, they can shoot the money all around the world to different accounts, maybe some of them land, maybe some of them don’t, and then they can kind of pull the money out of the successful transactions, and in this particular instance, unlike a physical bank heist where you might walk away with a few thousand dollars, maybe a few million dollars, most of this was unsuccessful and they still received $81 million. – That’s correct. So if you think about a traditional bank robbery, you know, you only get one chance at it, really, at a time. But… – Unless they’re really bad. – Unless they’re really bad. But in the cyber heist situation, even in this example of the Bangladeshi central bank heist, there were multiple, 30, 40 plus, instructions that they kept trying to get through, and so you can think it’s almost like spam. – So in a lot of things that we talk about in this course when we talk about ethics, a lot of decisions about whether or not to do something for a moral reason come down to this concept of proximity, which we talked about earlier in the course. And in this particular instance, it seems like, let’s say you’re going to walk into a store and you’re gonna steal a candy bar. You have to worry about the physical proximity of walking past the person and the psychological pressure of going against these societal rules.
But in this particular instance, you have some guys from some random country somewhere, and they’re not gonna see the outcome, they’re not going to understand who they’re harming. It’s gonna be disparate, right, the opportunity for catching them is very, very low, and as you said, the cost per infraction is minor. So once you figure out the code or the script or whatever necessary to kind of – You can just keep pinging. – Just keep doing it over and over and over again, and then if something hits, you could get $81 million. – And to your point, because of that, in the situation you explained, because of the geographic distance that you have from the location of the crime to where this person might be, it removes that connection with humanity, which then makes it easier to perpetrate certain illicit activities. – So that then brings up another question. ’Cause it’s not just the psychology of moral decision-making, but it’s also the very practical elements of enforcing the laws. – Yeah. – I mean, it must be that it’s incredibly hard. As FinTech gets better and more efficient, if the criminals get better and more efficient with the utilisation of FinTech, it’s gotta be harder to enforce these rules. – That’s right, and that’s absolutely correct.
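The "spam the system and keep whatever lands" economics being described can be made concrete with the numbers from this case. A rough sketch, using only the figures the case itself reports (35 orders attempted, 5 passed, roughly $81 million never recovered):

```python
# Rough sketch of why low per-attempt cost changes the economics of a
# cyber heist versus a physical one. Figures are from the case study.

orders_sent = 35          # payment orders spammed via SWIFT
orders_passed = 5         # orders the Fed did not block
amount_kept_usd = 81e6    # roughly $81M never recovered

success_rate = orders_passed / orders_sent
haul_per_attempt = amount_kept_usd / orders_sent

print(f"success rate: {success_rate:.0%}")            # ~14%
print(f"haul per attempted order: ${haul_per_attempt:,.0f}")
```

With a near-zero marginal cost per order, even an 86% failure rate still yields an enormous payoff, which is exactly the asymmetry the discussion is pointing at.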
So one of the large initiatives that many nation states, many governments, are trying to rely on now is cybersecurity cooperation between countries, because of the nature of what you described. It could be criminal activities, it could be other forms of access to data that we may want to control. It’s very difficult to coordinate information and investigations, as well as prosecute people potentially doing wrong, so even in this Bangladeshi bank heist situation, you know, Bangladesh was involved, the United States was involved, the Philippines was involved, Sri Lanka was involved, and so because of this transnational aspect, it’s very difficult for single nation states to deal with that alone, so there’s been a lot of movement towards cybersecurity partnerships and alliances amongst countries in order to try to help manage this problem.
Additional Readings
3.1.3 Case Study – Billion Dollar Bank Heist: Accountability
So, again, coming back to this. The kind of moral decision making. The psychology of business ethics, or FinTech ethics, largely comes down to the idea of how our actions will impact those that are around us, right? And how our actions may, perhaps, harm someone down the road. So, in this particular instance, from a legal perspective, the law is trying to ensure that those who are in a position to stop bad things from happening stop them. And then, that those who are harmed receive some type of redress for their harm. Their compensation, or whatever. So, in this type of challenging cyber security situation, there’s a really simple question. Who is actually injured by this? And how do you then compensate them, or help them move beyond that? ’Cause from a law enforcement perspective, if you can’t identify who is harmed, oftentimes it’s gonna be very difficult for a government to have enough courage, or to marshal the resources necessary, to really help those people. So, who is harmed here? – When you think about what was the after-effect of this, you know, one of the things was, Bangladesh ended up suing this bank in the Philippines, where this money was transferred to. Now, ultimately, at the end of the day, this bank in the Philippines may or may not have followed proper procedure. But, I think it’s very difficult to say that they were the perpetrators of the actual crime. And, somehow, they’re being held accountable for a minor mistake relative to the magnitude of the actual crime. And, so, to your point, I think there’s a large amount of disconnect, because finding the people who are actually responsible, and being able to hold them accountable, is pretty difficult. – So, then that brings up another related question. There are multiple parties along the way that are touching this transaction. Right? So, you have Bangladeshi, maybe, regulators. You have Bangladeshi bank officials.
You have the US Fed, and those who are touching it on the US side. You have Filipino banks, regulators, et cetera. And, really, players all around the world. So, who is in the best position to actually stop this? And, who should be responsible for this type of a transaction? – So, I think there’s a lot of debate around that. And, to be honest, I don’t know if that’s actually settled. I think, for crimes committed in particular countries… So, if we could identify the source country of where the hacking occurred, that would of course be a locus of the crime. So, you would maybe have some prosecution there. Apparently some of that money was, of course, sent to the Philippines, and then kind of cleaned, or laundered, through casinos in the Philippines. And, so, it seems there may have been some sort of criminal activity there. And, of course, the people who used that money, and put it into the casinos, may have been involved in this. But, they may have just been engaged without understanding the full magnitude of where the money came from. Who knows. But, then again, there’s a locus of the… But, each of those crimes, though they’re part of the larger narrative… It’s very difficult for a local prosecutor, say, in the Philippines, or wherever else that touches on this, like you were saying, to connect all the dots. – Yeah, you wouldn’t even have access to the information. If a Philippine official contacted the US Fed, there’s no way they’re gonna give them information… Well, it’s unlikely they’re going to give… – It’d be very difficult. – Very difficult. Very difficult. And, it would require, like, national level support. Okay, so then, in thinking about how we move forward on these types of things, especially in terms of considerations of the future… ’Cause this is only gonna get easier and easier, right?
– So, let’s move this now to a distributed ledger type of system, with blockchain, or other types of cryptocurrencies that would be involved. Do you anticipate this type of thing being more or less likely to actually occur? And, would it have changed the process of actually securing the funds, or finding who was responsible? – So, at a basic level, the issue is not the integrity of the blockchain or the ledger itself. The issue is, once those coins have been distributed, how you hold them or store them. And, we have a series of examples of certain types of wallets, or exchanges, being hacked, where people are able to access those coins. Or, sometimes coins being held for ransom because people get hacked and lose access to them. And, so that raises another interesting question.
Additional Readings
3.1.4 Case Study – Billion Dollar Bank Heist: Cultural Lag
– So, David, what did we learn from this case about cybersecurity and crimes? – Well, what I’m learning is that it’s really complicated. The cross-border nature of it means it’s incredibly difficult to enforce. It’s not just about the money, but there is reputational damage. There is embarrassment for governments, embarrassment for people. It’s really broad and wide-scale. There’s very little risk for the people that are actually going out and committing these crimes, and the cost to them is very, very little, so they can just spam these things out there and still make a really significant amount of money. So in this case, they failed the vast majority of the time, but they still walked away with $81 million. In a normal bank heist, that’d be like the biggest bank heist of all time. – Sure. – One thing that I don’t completely understand is, what are some of the failures that allowed this to happen in the first place? I mean, how does this even happen? How do you lose $81 million? – Yeah, yeah. So the interesting thing about this is, when we think about cyber crimes, so frequently, obviously, there’s a technological component, and advances in technology will kind of create more opportunities or different methods in which to steal money or, as we’re going to look at later, to steal data. But if we think about this Bangladeshi bank heist situation, it’s really interesting, because it wasn’t just about the technology. It was a confluence of different factors. There were people involved. There were processes that failed. There was old equipment that should’ve been updated that wasn’t, and so a confluence of these things actually led to this really significant outcome, and so I think, in a lot of situations, if companies, banks, and governments do use a lot of the basics of security – making sure you have good anti-virus software, making sure you’re trying to filter out as much malware and viruses as possible, making sure you’re using firewalls.
I mean, these are things that weren’t actually happening in this Bangladeshi bank case: making sure that you have updated equipment, making sure that your people understand the processes. So, for example, after this happened with the Bangladeshi bank, SWIFT and the New York Fed put out announcements saying, hey, be aware of this, and so, when subsequent attacks or attempts happened, like in places like Vietnam and Ecuador, they were actually able to stop them from occurring, even though the attackers were again trying to manipulate SWIFT transmission codes and things. – So I guess the takeaway is, on the one hand, it’s kind of amazing that we’ve gotten to the point as a society where, with the push of a button, money is flying around the world. This probably happens millions upon millions of times. – At least. – Every single day, right? – At least. – And so, on the one hand, although we’re focusing on the negative side of it, it’s actually pretty amazing that governments, even in developing countries, say, Bangladesh, are able to bank with the New York Fed or transfer money all over the world, and most of the time, that works out for the benefit of everybody, right? But the flip side is, we also have to be cognizant of the challenges and make sure we’re staying ahead of these things, because obviously, one of the big things that we’ve talked about a lot in this course is that the law and punishments are often retroactive and reactionary, and they’re not really able to stay ahead of these problems, so I assume we’re going to keep seeing some aspects of these things going forward. – That’s right. 
– Unfortunately, we’ll always be responding to the last crime or last situation, and that also maybe means we may not be able to react effectively to the last situation either, as it takes time to put in good policy, and it takes time to get everybody onboard. And to the point that you raised earlier about the difficulty of enforcement: if these are transnational crimes, you have to get multiple jurisdictions involved- – Yeah. – To police together. – It actually reminds me of, have you seen the movie, Catch Me If You Can? – Yes. – The Steven Spielberg movie? – Yeah. – Great movie. – Yeah, Frank Abagnale is considered one of the greatest fraudsters of all time, right? And recently, within the last few years, he spoke at Google, in a really widely watched video where he was giving a speech about his life, and it was so compelling and really interesting. And one of the questions at the end that struck me was, somebody from Google asked him, “Do you think you could’ve been successful as a fraudster today, given all the advancements of security and technology?” And his response, quite famously, was, “It’s easier to be a fraudster today than ever before.” And he said that he would’ve made so much more money had he tried to commit fraud in this day and age than at the time. So I think it’s kind of interesting. As technology advances, more good things are happening, but it also widens the door for people to abuse the system. – Exactly. Additional Readings 3.2.1 Case Study – Apple v. FBI
Okay so, we talked a lot about data and the importance of data, but who’s responsible for protecting it? As we consider this question, let’s think about another situation. In December 2015, unfortunately, there was a terrorist attack in San Bernardino, California. The two attackers were eventually killed, and the authorities recovered an Apple iPhone from one of the attackers. The FBI, however, was unable to access the iPhone because it was encrypted, which basically means there was a security password needed to enter the phone. The problem the FBI faced was that if they entered the wrong password a certain number of times, the information on the phone would be totally erased. So the FBI went to Apple and asked them to decrypt the phone, allowing the FBI to access the information inside. From the FBI’s perspective, they thought that this was important because the information was necessary for their investigation, and could even prevent a possible future terrorist attack. From Apple’s perspective, however, they thought that this could potentially lead to what lawyers call a “slippery slope” — basically a precedent that might ultimately lead to greater intrusion and other privacy issues for its users. As a result, Apple rejected the FBI’s request to provide access to the locked iPhone. Now, this was so important to the FBI that they actually went to court to try and compel Apple to provide access to the phone. After some posturing though, the case was never tried. And so we don’t really know the exact answer to the legal question of whether the security risk was enough to compel Apple to open the phone. But this story raises a lot of very interesting questions that we need to consider. So, for our students – in this situation, let me turn the question to you: how do you feel about Apple’s decision? And why do you feel that way? Additional Readings 3.2.2 Case Study – Apple v. FBI: Trust
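The passcode mechanism described in the case above, where a fixed number of wrong guesses causes the device to erase its own data, can be sketched as a simple state machine. The following Python is a purely hypothetical illustration, not Apple’s actual implementation; the class, method names, and the 10-attempt limit are all assumptions for the sake of the example.

```python
# Toy illustration of a retry-limited lock that wipes its data after too
# many wrong passcodes. Hypothetical sketch -- not how iOS actually works.

class LockedDevice:
    MAX_ATTEMPTS = 10  # assumed limit; the real threshold is device policy

    def __init__(self, passcode: str, data: str):
        self._passcode = passcode
        self._data = data
        self._failed = 0

    def unlock(self, guess: str) -> str:
        if self._data is None:
            return "wiped"           # nothing left to protect
        if guess == self._passcode:
            self._failed = 0
            return self._data        # correct guess: hand over the data
        self._failed += 1
        if self._failed >= self.MAX_ATTEMPTS:
            self._data = None        # erase everything after the limit
            return "wiped"
        return "locked"

device = LockedDevice("1234", "contacts, photos, messages")
print(device.unlock("0000"))  # a single wrong guess just leaves it locked
```

This is also why simply guessing passwords was not an option for the investigators: once the attempt counter crosses the threshold, even the correct passcode can no longer recover anything.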
– So Dave, how about in your situation, how do you feel about this? – Well, if I understand things correctly, on the one hand, you have a really challenging scenario where, as a government, you’re trying to prevent crime. Alright, and one of the things we’ve talked about in this course is that any type of criminal prevention is largely reactive. And so as a criminal agency or a law enforcement agency, you want to be as proactive as you can and as predictive as you can, so that you can stop things from occurring in the first place, right. But the flip side is, it’s a scary thought to think that a government would have the right to access our personal data within a smartphone at any time simply because they demand it. If you think about what is contained in our smartphones now, it’s not just the text that we send, although that’s significant, it’s not just our images, although there’s a lot of those. It’s where you go every single day, what advertisements you stopped to look at, what payments you’re making, who’s in your social network and who you communicate with. And so the idea that the government would demand that is actually, you know, kind of challenging. – That’s interesting, but what about the argument that people might make, that by virtue of us using certain applications on our phones, or using the phone to do banking or other things that determine our location, our transaction history, our social relationships – there are people that argue, and companies that actually make this claim, that by virtue of the fact that you’re using our product, we have access to that data. So how can we distinguish between that situation and governments wanting that data too? Because it seems like we give a lot of that data willingly, almost, to companies, but why are we not necessarily willing to do that when it comes to governments? – It’s a good question. I think, first of all, I’m not sure I agree that most people give it willingly; they kind of give it ignorantly. 
– There we go, yep. – And so a lot of more progressive laws, including in the EU, for example, are now saying that you have to opt in to that data sharing, rather than opting out, because again, from a psychological perspective, as we’ve talked about a lot in this class and others, people are lazy and we often will agree to things that we don’t fully understand, especially when that information is hard to process. I think that’s definitely true with smartphones. One of the things with smartphones is that they took the world by surprise, and a lot of our behaviours evolved within that ecosystem before we really understood the consequences of those things. So we’re seeing, with Facebook and other things, companies that have in many ways taken advantage of our ignorance and our laziness, and so now the law once again is retroactively going back and restricting that. And I think that’s relevant because, although it’s important to think about how a company can monetize our data, and that’s something we should be talking more about, the reason why I think most people would be more concerned about the government knowing it is because the government has the ability to coerce you even further. Right, they have the power not only to give you freedom, they have the power to take that freedom away. And so I think, for many people, the idea of government, any government, having free access to that type of data is kind of an Orwellian, 1984 type of scary amount of data. Additional Readings 3.2.3 Case Study – Apple v. FBI: Cultural Lag
And are there different examples of countries over the last few years, all over the world, that have instituted certain types of these kinds of controls and filters? And do we feel that those things are necessary? And do we feel that those are the types of things that, as users, we should at least be somewhat willing to hand over to the government? If not, where do we draw that line? – Yeah, so it’s tricky. A lot of people would look at certain governments and characterise them as authoritarian or very aggressive in terms of their policing of people, but the reality is, London is one of the most surveilled cities in the world. – They have more CCTV cameras. – Yeah, they have more CCTV cameras, I think, per capita than any city in the world. And this is true in New York, DC. I lived in DC. There are cameras everywhere. And so it’s true, to a certain extent, that we’ve already given up so many of these concepts of freedom and so much private information that we don’t even realise, I think, to a certain extent, what the effects of that will be. And so, yeah, I think as a society we definitely need to take into account the freedoms we have already given away. But now, as we’re looking at these things retroactively, it’s not just about us as the data providers or the government as the eventual user; there are these companies in between. And I think the question for FinTech is, what is the moral obligation of those companies in the middle in terms of protecting that data? And I think it comes back to a fundamental question: do we own the right to our own private data? Right, I think that’s why the distributed ledger and blockchain are so appealing to many people: the idea that private data is one of the biggest and most important commodities in the world right now, and yet we individuals, who the data is about, have no control over who uses it, who sees it, who sells it, et cetera. 
And so I think these are the types of things we have to figure out as a society. – And I think those arguments and debates that you rightly described, that as societies we need to figure out, apply not just to FinTech, which of course they do, but also to other advances we are making in technology, particularly in biotechnology. So if we think about the commercialization of DNA testing, as you provide different samples to companies so they can check your family’s history or health markers in your DNA, there’s a lot of debate and questions about, hey, once you hand that over to these companies, who actually owns that DNA at that point? Because that is so uniquely yours, yet can you hand that over to somebody else? And again, that’s another very similar situation to what you had described before, about how we sign up for apps ignorantly, not understanding what rights we’re giving up. Similarly, in these kinds of DNA tests and other commercial biotechnology projects that are going on, there’s a lot of ignorance around, hey, what are you actually giving up to these companies? – Well, that’s a great tie-in, actually, to this Apple case, because in the state of California, right, there’s that great case where they had this unsolved serial killer case, and the police for decades didn’t know who this guy was, and then he takes one of these tests, it wasn’t a blood test, but he did a DNA test, sends it in, didn’t realise that the data that was produced wasn’t going to be private, and it ended up on a public database, and that data from his DNA was actually used to find him and capture him as a serial killer. So many people in society were debating this issue. On the one hand, they were super excited that you have this… – Murderer off the street. – Murderer, yeah, he’s off the streets, right? And he’s been caught. 
But the flip side is they’re like, wait, hold on, how did they get his information? How did they know it was him? Because he put this information up and didn’t realise that it could lead to exactly identifying him as the killer. And so I think this is, excuse me, this is the dichotomy that we face now: the utilisation of smartphone technology and all these technologies has opened up so many avenues in life that we’ve never had before, communication and financial transactions and data and knowledge and so many cool things. But we are simultaneously ourselves becoming the product. And I don’t think we really understand the repercussions of that yet. Additional Readings 3.2.4 Case Study – Apple v. FBI: Accountability
So if we go back to your original question about whose responsibility it is, then, to protect data, where do you fall on that? – So importantly, as a consumer, I’m increasingly realising that, number one, it has to start with me. As a parent, I’m realising that I’m trying to do a better job of educating my children about privacy and data than my parents did, not because my parents are bad, but because they didn’t have to face these issues. – Challenges. – Yeah, and actually studies have shown, when they’ve looked at morality and decision-making psychology, that young people today are very similar in their stances on moral decision-making in almost every regard, except for one. And the one big difference today, versus, say, one or two generations ago, is the perception of privacy. And young people today do not have the same standard or high regard for privacy, because they’ve grown up on a stage. It’s a public stage, right? Every day is Instagrammable, and if you didn’t click it, it didn’t happen, and so we’ve given up so much of our own privacy that it’s no longer even perceived as a moral issue anymore, because there is no other option in their mind. So I think it definitely starts with the consumer. I’m gonna flip this around on you, though, because from an Apple perspective, Apple’s business model now, publicly, from a marketing perspective, is, hey, buy our products because we don’t sell your data, right? And this is in part true, because a lot of their revenue model is based on the hardware that they produce, right? So do you think that they actually care about data and they’re using this as kind of a moral high ground, or is it just that they know that they’re making most of their revenue off the hardware anyway, and so this is just kind of a marketing ploy? 
– Yeah, that’s an interesting question, and to be honest, I don’t think those two things are mutually exclusive. And certainly, for Apple executives, I’m sure they feel like they do perhaps have a moral high ground, because of this experience that they had, and because currently their business model doesn’t require them to monetize the data they have on users and send it out to external sources, by the nature of their business and the ecosystem in which users operate, since it’s almost all within Apple. – Yeah, it’s enclosed, yeah. – It’s enclosed, and so at a certain point, if that business model changes, will their ability to take that moral high ground change? Perhaps. The profit incentive can be really powerful for listed companies. – See, this is where I struggle with this, and so my history, as you know, is I used to be Apple’s outside counsel within Asia. They had a direct phone line to me when I was still working for my law firm, and I had a lot of close interaction with various people within Apple at the time of the iPhone, the iTouch. These things were first being introduced into Asia, and one of the things that immediately became apparent is that the profit margin was paramount, and the profit margin on the hardware, at the time, was well over 60%, right? Just imagine, that’s like a cosmetics type of profit margin. And the interesting thing about Steve Jobs and the business model that he made is that it was fully self-contained, because he wanted people in an ecosystem so they essentially would have to… – just use Apple products. – Yeah, because you have to give up a lot to leave that ecosystem, right? So whether it’s the earbuds or all these things, they can only work within the Apple ecosystem. And so now, interestingly enough, you have, say, the Android model, which is the exact opposite, where it’s like, flood the market with these things as broadly as you can, open the software… – And they don’t care who the hardware is from. 
– They don’t care who it is, because their revenue model is based off of the selling of the data, and so I’m not so sure that Apple is altruistic or moral in this way. I think if they had built a model that could generate revenue in a way similar to Google, while simultaneously maintaining the profit margins on their hardware… I mean, they’ve proven that profit is paramount. – And so again, kind of to what I was saying, I think, when push comes to shove, particularly for publicly listed, traded companies, that profit motivation frequently overwhelms any kind of moral principles that CEOs and companies may espouse, which is unfortunate. Now, we do see certain leaders, more and more, taking on more of an activist approach, beyond just their business, into certain aspects of political activism, and having a voice when it comes to certain moral issues, which, I think, perhaps is a good thing. And I think it’s necessary for such leaders to contribute that voice to these kinds of debates, in order for us as users, them as the producers of these products, as well as governments, to collectively try to think about how we can manage these issues around data and protecting data. Additional Readings Kezer, M., Sevi, B., Cemalcilar, Z., & Baruh, L. (2016). Age Differences in Privacy Attitudes, Literacy and Privacy Management on Facebook. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 10(1). Retrieved from https://cyberpsychology.eu/article/view/6182/5912 3.2.5 Case Study – Apple v. FBI: Privacy
So we kind of flew by the fact that the US government asked Apple to build a backdoor in the first place. Right. And so I think many of us maybe just assumed, like, shouldn’t the very idea that they can build a backdoor into these things mean that, if they wanted to, they could access this data essentially whenever they want, you know, assuming the agreements would allow it? What about the idea of a technology company having that ability in the first place? There are huge markets globally, in terms of, you know, smartphone technology and other types of FinTech data, where people maybe don’t even realise that these backdoors already exist. And in some cases, it’s almost a free flow of information from the user to the company, and then eventually to the government. So is that even moral or ethical in the first place? – If we just extrapolate from Apple to other kinds of technologies, either apps, hardware, or, you know, wallets that may hold cryptocurrencies, almost all of these have some sort of protection, a password, keywords, whatever it may be. And some of those have what you might call a backdoor, and some of them don’t. So frequently, with a lot of wallets, once you lose that password, your key to get in, that’s it: you lose it, and the coins, or whatever value you had in there, are now gone. So more broadly, do we, or do governments, then have a responsibility to police that? – Right. – That’s an interesting question. I think if we tie this back into the Apple case, the question then is, at what point does the need for the government to access certain information… – Security, safety. – Rise to the level where the company, or privacy, is then compelled to yield? Now we have lots of history, in a lot of countries, where governments frequently go to somebody and say, hey, I know you have this kind of… You wrote this paper, the Smithsonian paper, give us that evidence, or you videotaped something, give us that evidence. Right. 
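The wallet point above, that losing your password means the coins are permanently gone, comes down to how key-based encryption works: the decryption key is derived from the password, so a lost password cannot be "reset" by anyone, including the wallet maker. The following is a toy Python sketch of that idea, not a real wallet implementation; the key-derivation parameters, the salt, and the XOR keystream are illustrative assumptions standing in for a real cipher.

```python
import hashlib

def derive_key(password: str, length: int) -> bytes:
    # Stretch the password into a keystream of the needed length
    # (toy stand-in for a real KDF plus cipher; parameters are arbitrary).
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), b"wallet-salt", 100_000)
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def xor(data: bytes, keystream: bytes) -> bytes:
    # XOR is its own inverse: applying the same keystream twice round-trips.
    return bytes(a ^ b for a, b in zip(data, keystream))

secret = b"private key controlling the coins"
ciphertext = xor(secret, derive_key("correct horse", len(secret)))

# With the right password, the secret comes back...
assert xor(ciphertext, derive_key("correct horse", len(ciphertext))) == secret
# ...with any other password, you get unrecoverable bytes: the coins are gone.
assert xor(ciphertext, derive_key("wrong guess", len(ciphertext))) != secret
```

Because a different password derives a completely different keystream, there is no partial recovery and no backdoor in this design, which is exactly the trade-off the dialogue is weighing.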
So, you know, this is not uncommon in a lot of types of criminal investigations, so there is that analogy that can be made. But, you know, with the aspect of what we’re talking about, this data, some of which is still private and may not necessarily always be directly tied to a compelling government interest, should the government automatically have some way to access that information? – Yeah, I think that’s questionable. But I think if we look through history, there have been situations where governments do demand that of companies. – Yeah. And, I mean, this is like the new form of the national security argument, right? So before, national security was guns and bombs, et cetera. Now, it’s knowing where people are, and it’s mass behaviour modification. So the bike-sharing apps, I think, are good examples of this, where, you know, it’s not just about money going between a customer and a vendor; it’s the idea of understanding where people are at all times. So my last question for you is, when you see a case like this… do you have an iPhone? – I do. – You have an iPhone, okay. So when you have a case like this, – And I’m wholly ingrained… – Within the Apple ecosystem. You’re in the Apple ecosystem, so good, they’re not using your data. Do cases like this make you question carrying a smartphone? Not that you’re going to commit any, – That’s right. – Wrong acts or anything, but, I mean, did you ever stop and say, or for your kids, you’ve got two kids, right? Did you ever stop and say, like, do I want a smartphone in my kids’ hands? – Yeah, so when you asked that question, that’s immediately where my mind went. And I think for a lot of young people, some of them may not be mature enough in certain ways to understand the issues that surround some of these things. So like you were talking about with that study, about how the younger generation thinks about privacy compared to, you know, the generation before. 
I think it’s important that we educate our children and our young adults about the impacts of this kind of use of smartphones, the use of particular applications on smartphones, and technology in general. I think that kind of education will make other aspects of forming good policy better. I think we know, from different things that we’ve touched on in the course, that informed consumers will generally make better decisions. And if they make better decisions, then we can more fully utilise the positive aspects of these new financial technologies, as opposed to being used by them. – Would you force your kids to give you the password to their phones? You think? – You know, fortunately, they’re young enough. – But just in the future, do you think? – I don’t know. – Because this is a microcosm of a broader question. So when we’re talking about our children, my children don’t have smartphones that they carry around, but I would want to know what they’re doing, and I would see that as my responsibility, to keep them safe as a parent. And if you take that and extrapolate it out to the government, that’s exactly their point, right? So I like your answer, because you’re saying it’s all about educating and then letting people make informed decisions. But I think when you are in the position of authority, and you’re trying to protect people, oftentimes that kind of desire to protect maybe overcomes, yeah. – And so my hope would be, and I honestly don’t know, my hope would be that my efforts to try to educate are effective, to the point that I can hopefully trust them enough that they could use this technology initially. And if there’s potentially an issue, then we may have another discussion about the use. – And then we’ll hack their phone. – That’s right. And then we’ll ask Apple to hack the phone. 
But I think a nice way to wrap this up, and to get at the complexity of this issue, is a quote by General Michael Hayden, a former director of the National Security Agency of the United States, as well as of the Central Intelligence Agency. With respect to this whole situation with Apple and the San Bernardino terrorist case, he commented in a report that this may be a case where we have got to give up some things in law enforcement, and even counter-terrorism, in order to preserve this aspect of our cybersecurity. And I think he captured that well, in the sense that there’s a balance of national interest, but there’s also a competing interest of how we want to secure data like this, cybersecurity. And this is an ongoing debate that I think we will continue to have, and hopefully our students, thinking through this course and these questions, can also contribute to the debate wherever they may be. Additional Readings 3.3.1 Case Study – The Sony Hack
In an age where data is supposedly the new oil, serious concerns have been raised about data protection and compliance at FinTech companies, especially in light of the recent spate of global cyberattacks, as the presence of valuable personal information makes FinTech companies increasingly attractive targets for cybercriminals. Okay so, let’s dive into another story. On Monday, November 24, 2014, a typical week begins at Sony Pictures Entertainment’s headquarters in Culver City, California, right next to Los Angeles. As employees begin arriving at work, they realize that this is far from an ordinary work day. The image of a skull flashes on every employee’s computer screen, accompanied by a threatening message warning that “this is just the beginning”. The hackers, calling themselves the Guardians of Peace, go on to say that they have obtained all of Sony’s internal data, and that if their demands are not met, they will release Sony’s secrets. And because of the hack, the whole Sony network was down, rendering the Sony employees’ computers completely inoperable. The hack had brought the global corporation to an electronic standstill. On November 27, the hackers leaked five upcoming Sony films online. This is the first of what were to become many subsequent leaks in the days and weeks to follow. Speculation began arising that North Korea might be responsible for the attack, in retaliation for the movie The Interview, which depicts an attempted assassination of North Korea’s leader, Kim Jong Un. Back in June, when the trailer was first released, North Korea had called the movie an “act of war”, saying that it would carry out strong and merciless countermeasures. About a week later, the FBI officially began an investigation, and Sony hired a cyber-security firm to carry out an investigation of the attack. 
In the following days, more leaks are published online, including the salaries of top-paid executives and more than 6,000 employee names, job titles, home addresses, salaries and bonus details. And reports also arose that Sony was fighting back, using hundreds of computers in Asia to execute a distributed denial-of-service, or DDoS, attack on sites where its stolen data were being made available. On December 7, C-SPAN reported that the hackers had stolen 47,000 unique Social Security numbers from the Sony computer network. With this data being leaked on the internet, other cyber criminals instantly swooped in, leading to various fraud, theft and other problems for Sony’s employees. On the same day, North Korea denied all involvement, but called the hack a “righteous deed of the supporters and sympathizers of the country”. And beyond just coping with the cyberattack and the various leaks, Sony was also challenged on other fronts, such as by former employees filing class-action lawsuits against the company, which they argued had taken inadequate safeguards to protect personal data. And Sony also faced battles with the media, demanding that the media stop reporting on the stolen data, claiming that journalists were actually abetting criminals in disseminating the stolen information. On December 16, the Sony hackers threatened a 9/11-style attack on theatres that showed The Interview, which led to theatres across the United States cancelling their premieres, and to Sony pulling all TV advertising for the movie. Urged by President Barack Obama not to give in to the hackers’ demands, Sony instead jumped directly to a digital release. On December 19, the FBI officially implicated North Korea in the Sony hack. North Korea proclaimed its innocence, and in the following days, heated rhetoric emerged from both countries. Now, other security experts had some doubts about whether North Korea was actually involved in the hack. 
Another theory points the finger at angry former employees, whereas others say it was the work of outside hacking groups that simply used the release of The Interview as cover for their actions. Now, the challenge that we have is that the Sony hack was not a single anomaly, as we are witnessing a huge surge in data breaches across the world. Just to give you a few examples: In 2013, 40 million credit and debit card records were stolen from Target. And just before the Sony hack, 56 million credit card numbers of Home Depot customers were also breached. In 2017, some of the biggest companies in America were also hacked, such as Yahoo, Uber and Equifax. In the case of Equifax, the hack compromised the data of around 143 million Americans, that’s about half of the US population and well over half of the adult population. And the hackers had gained access to over 200 thousand credit cards. And in 2018, we know that Marriott had a data breach affecting 500 million guests. So, with all these massive data breaches globally, important questions naturally arise around our key principles of trust, proximity, accountability, cultural lag and privacy. Like, who owns your data, and who is protecting it? Can you trust them? How may data protection be regulated? With recent technological advancements, are we able to protect our own data and privacy? We’ll discuss these questions further with you in our next session. Additional Readings 3.3.2 Case Study – The Sony Hack: Trust
Okay, so I mean it’s interesting and that allowed people to maybe view Spider-Man a little bit early, but what is the connection between this and FinTech? This isn’t really like a FinTech case. – So that’s a great question, I think this case, the Sony hack leads into kind of broader questions about data and security, and I think those are things that we wanna talk about on the context of this module. But I think one, in terms of FinTech, we like to think if something has ‘crypto’ in front of it, somehow it’s maybe more secure than other forms of finance or data or other spheres of finance that we may be involved in. – It’s like we don’t understand it so we assume other people don’t too. – Perhaps, perhaps, right. – Yeah, yeah. – We have cryptocurrency, Bitcoin being probably the most representative of cryptocurrencies at the moment. And even then we know that participants related to the cryptocurrency market have been hacked, right? So probably the foremost example of that is there is an exchange called Mt. Gox that was based in Japan, at the time it handled most of the cryptocurrency transactions of the world, they ended up being hacked and losing their Bitcoin valued at billions of dollars and eventually they went bankrupt. And so that’s a very direct example of how cybersecurity, still are very relevant, even to things in FinTech that we think may be secure. Right, so that’s the first point. I think the second point is, a little bit more broader in the sense that as FinTech, and different types of applications of it become much more widespread, and populations that maybe didn’t have access to traditional forms of finance now do through not having to go to a brick and mortar bank but accessing banking services through their phones, right? – Yeah, yeah. – You would assume that a lot of these populations maybe are not as technologically sophisticated. 
And so as they get exposed to these new technologies, their concept of cybersecurity and how to protect their data will become an issue too, and they can potentially be a population at risk in terms of hacking and cybersecurity. So this is why this is a very important topic that goes hand-in-hand, or in parallel, with advances in technology and FinTech. – So one of the things that often comes up when these types of things happen is who owns the data, but also who's responsible for this. So in the Sony case, what happened to Sony? Did they get in trouble for this at all? Was there any liability on their part? – Well, there's no criminal liability that we know of, but we know from a civil liability standpoint, meaning somebody filing lawsuits, that there were a number of lawsuits against Sony saying, hey, you should have been more responsible for how you protected that data than you were. – So is this customers, employees, shareholders, all of the above? – So I think, my understanding is, the majority of the cases that were brought against Sony were generally from former employees, who probably had much more data with Sony, because Sony held their employee records and different personal information that ended up getting exposed. But if you think about it, that information is about you individually, or individual people, but it's being held by somebody else, so who actually owns that data? – Right. – Does Sony own that data? Do you own that data if it's about you? Because that idea of ownership then links into the idea of responsibility, which then links into the idea of protection. And understanding that gives us a more comprehensive approach to trying to figure out who actually has a responsibility to protect all of this data. – And it doesn't seem, either from a regulatory standpoint or certainly from an ethics standpoint, that we've really answered those questions yet, right?
– Really quick, on the ownership and liability, it's very interesting that in many situations, with certain kinds of social media or social networking services, users will post different types of personal data, be it pictures, be it stories, be it videos, and frequently these social media services will actually say they don't own the data. – Yeah, yeah. – But they will say that they are licensing the data– – They don't want the responsibility of ownership. – And then that creates all kinds of questions: well, if we own the data now, then do we have to pay you for that data? So frequently the way they navigate this somewhat thin line is: you still own your data, but you've licensed it to us by virtue of using our platform. And in that situation, they can use it as if they owned it, but maybe they don't have the same responsibilities for protecting it. And so this again raises a number of questions.

Additional Readings

3.3.3 Case Study – The Sony Hack: Accountability
So one of the challenging things with this type of data breach relates to time, right? It's often very difficult for the parties involved to know when they were hacked, and then after the fact, it often takes time for them to react or even publicly tell people that the hack occurred, right? So how does that impact these scenarios? – So time is a really interesting variable when it comes to these cybersecurity matters. As you mentioned, frequently companies don't know, or only know later, that they've been hacked. So then at that point, if something happened many years ago– – So it's not like on TV, where it's like, I've been hacked and all the lights are gone. – That's right. – Everyone's typing on the same keyboard at the same time. – Well, I think maybe in certain situations that could happen, I don't know, but I imagine in a lot of situations a company has been hacked, or data has been exposed, either intentionally or unintentionally, and they might not know about it for a prolonged period of time. I think we've seen a lot of examples of that; even the Mt. Gox situation that we talked about a few minutes ago is a situation like that, where the hack might have happened years before. So there's a lot of uncertainty around this time element, particularly: when did it happen? But then, on the back end, let's assume the company has found out, a day later, the same day, whenever it is, then how do they react? I mean, it seems from some studies that on average, companies take at least six months to react and figure out what their next step is. One challenge is that each of these companies that goes through this has very different capabilities.
So certain companies may have really good management processes, good leadership, good operational control, and teams with some sort of protocol that they go through when an emergency situation comes up. But the reality is most companies are probably not really well managed, and maybe don't know what to do when that happens. And then you have very interesting incentives, particularly for listed companies. Companies that are publicly traded and have this kind of issue will have debates, probably at the highest levels, both in the boardroom as well as at the chief executive officer level, about when they should reveal certain information, should it be before a certain deadline in terms of quarterly cut-offs and things like that, because maybe they don't want to impact the share price, and, you know– – Or their job. – And so there are a lot of incentives, or disincentives, that go into wanting to publicise or not publicise the information as well. So this is a great challenge that we have, and then it goes back again to the idea of who's in charge of protecting this data, right? Because if, by whatever method, the company has aggregated or compiled this data, then do they have responsibility or stewardship over it, right? I think from an ethical perspective, we would say yeah, right: if something has been left in your care, then you would assume that there'd be some level of responsibility to protect what has been left in your care. Now, it seems that that's not always the case from the behaviour of business leaders.
– Yeah, and one of the things, I mean, let's assume that North Korea was involved, let's just say, and it's not clear that they were. This is one of those cases that, when it happened, kind of brought home to me this idea that personal data is in many ways as important to national security as a border might be, and I had never really thought about that before. So what are the security implications from a data standpoint? – Yeah, so if we take a step back, there are a lot of people who feel data will be the fuel of not just FinTech but perhaps the Fourth Industrial Revolution. So people talk about all the technological advances– – AI with lives– – That's right, all of this will be empowered or further enhanced by large amounts of data. And so at the core, large technology companies in the United States and in China and other places in the world, a lot of them are branching out and building very large platforms, where users are participating on the platform through various different services that these companies offer. But at the heart of all of that is that these companies now have the opportunity to get a fuller, more comprehensive view of usage of data, and richer data that can be used to develop new products, as well as to develop profiles of people. Now, we already know that in China, for example, at the government level, they're trying to develop social credit. So, to tie into your question, that has very direct implications on aspects of how that credit or that data may then be used. – National policy. – That's right. – There are examples of this where government officials have said they'll use this in determining visa rights– – Who can leave the country and who can't leave the country– – What jobs you can get, what you can study in university, whether you can be a journalist, a lawyer, et cetera.

Additional Readings

3.3.4 Case Study – The Sony Hack: Cultural Lag
Okay, one of the issues on the ethics side that we typically lump in is the regulation of this, right? And so part of the issue with data and data protection is that globally, there are different standards everywhere, right? And the nature of this data... Okay, again, we keep coming back to the idea that money is not in a vault anymore, right? It's code somewhere, and so it's information going in and out of servers. And when the data leaves a jurisdiction, it's not like it's physically leaving, right? But the server is maybe hosting data for someone in Hong Kong, in the Philippines, or travelling, and the information could be travelling via lines through the U.S. system. What do you think is the responsibility, from a regulatory standpoint, for consolidation? How can we, as a society, have standards for these types of things when you have this spaghetti bowl-like mixup of regulations globally? – Yeah, so that's a really great question. The reality is, I don't think anyone has a great answer to it. On one hand, the way the U.S. tries to extend its regulatory reach is basically that there are a number of financial regulation laws that say if you use U.S. dollars for transactions– – Every bank does. – Which almost every bank, every country, every large company in the world has to do in some way, then somehow, because of that, you are touching the U.S. financial system, and if you've committed some sort of crime, or that transaction is part of a larger network of maybe illicit transactions, then you've maybe fallen into U.S. jurisdiction. So there's a set of regulations that get to this. But, like you're saying, what if you're actually not even using currency? How does that work? So that raises a broader set of questions as well. I think, if we think about cybersecurity as well as cyber regulation, by their very nature those are reactive things. They will always be reacting to what just happened.
And so it's very difficult to put in bright-line rules that say, "Oh, ABCD," and as a result, I think as users, and consumers, and people who will be impacted by these advances in technology, we have a responsibility to think within an ecosystem of the values and principles that we might want to abide by. – Yeah. – Because continually, what we're going to find is we can't rely on law, and we can't rely on governments per se, to be at the forefront of leading how we want to govern this aspect of the problem. – See, but this is the challenge, right? It's not like a typical negotiation scenario where I'm going to buy something from you, and then I get the chance to say, "Well, I want the price to go up or down," whatever, right? Every single day, we click on potentially hundreds of websites where we are agreeing to their privacy policies. Sometimes you formally have to agree. A lot of times, it's hidden behind the scenes. You're not even paying attention to it, right? And so, on the one hand, that diminishes the value of those things; essentially they're pushing that burden onto us, as the consumer, to say, "Do you agree to this or not?" But the challenge is it's not like anyone is taking the time to read and understand those things. And then even if you did, it's still not like you have the opportunity to negotiate. It's not like you can go to Facebook and say, "Okay, clause three, line number two, I don't think this is appropriate, so let's work that out"– – Yeah, the negotiation is: if you don't want to use it, then don't use our service. – So, same with the banking system, right? The financial system. Either you're in or you're out. And so it's not like we really have a choice. So even if consumers wanted to have a choice, it's either: do you opt in, or are you gonna eliminate yourself from this entire system? – Yeah, so that's very interesting.
And so we can see some analogies, or similarities, to a few other types of situations. For example, in the financial services space, particularly in the world of derivatives, we have organisations where market participants got together to set ground rules for how they want to transact with one another, because they didn't want to have a lack of clarity or a grey area, and didn't want to wait for government or law to come in and say, "This is how it's gonna be." And so I think, from the consumer perspective, you're right. At the individual consumer level, we don't have a lot of influence. But I think collectively, there is some influence. Similarly, I think what we want to do is invite companies to have these discussions amongst themselves as industry participants, as market participants. How do we want to create a fairer, more secure ecosystem for the product to be in? Because ultimately, this is a very long-term game. But if they don't have that discussion, then, in the long run, it will just become more problematic. – Yeah.

Additional Readings

3.3.5 Case Study – The Sony Hack: Privacy
Getting back to the ethics of this, the saying goes that, "If you're not paying for a service online, then you are the product." – Product, that's right. – Right. Yeah. And so– – Which is a great line, by the way. – It's a great line, yeah. My students say all the time, you know, "We use this because it's free." I'm like, it's not free. – You're the product. You're the data, basically. – Yeah, exactly right. So the business model is no longer even that thing overtly; it is the data they're collecting behind the scenes, and what they're doing with the data after the fact. Is there any consensus about the ethicality of that as a business model, especially when it's often hidden from the consumer, and especially for children? For example, a lot of games are free, so Candy Crush, those types of things, they're free, and they use the same psychology that created the gaming systems within, say, casinos, to get your mind wrapped around one thing: I gotta do one more, I gotta do one more, right? Is that somehow pernicious or unethical, or is it just an extension of, you know, people's weakness? – Well, I think that raises another great question. So we know most applications that basically monetize off of data or ads, and that require active users, embed a lot of psychology– – Yeah. – into the user interface– – Totally. – into what information comes into your feed, because over time, they're mapping the things that trigger you, basically. – And, just to clarify, when you say using psychology, essentially you're saying, using the weaknesses that they know exist within human behaviour collectively– – That's right. – in order to keep us there. – From a behavioural science and psychological perspective, we know that we are less in control than we often think we are. – Yeah.
– Right, and there are certain triggers, colours for example, information, sounds, that tend to have influences on people's behaviour, and a lot of these companies, particularly social networking sites for example, spend a lot of time actively thinking about this, to ensure that users spend as much time as possible on their site. Because as users do that, they use it more, the companies collect more data from that, and then they are able to feed it into the model again. – So my last question about this, from a data standpoint, is: we're talking about these implications for us, but what does this mean for the next generation, especially from an ethics standpoint? Because one of the things that we've discussed is that the only major difference that they've found between previous generations and this generation, from an ethics standpoint, is their perception of privacy. – That's right. – And they're living in a world without privacy, essentially, right? – At least in the way we would've thought of it when we grew up. – Sure, exactly. And so, how do we perceive the next iteration of this? Do we think that with distributed ledger technologies and other blockchain technologies, we will be able to control our own data, own our own data, kind of determine what people see, or is this just going to be a new way to solidify this power of the data? – So that creates a very interesting dichotomy in terms of the future, because on one hand, there's a big pursuit. Blockchain and other technologies are, in some respects, more anonymous, right? Even though they're open, they're also more anonymous in terms of protecting– – As anonymous as the system wants them to be. – That's right.
And so, in some sense, some of the FinTech technologies that we think about now actually create greater levels of anonymity than might've existed in the traditional financial system, but on the other hand, there's a lot more information that was private that is now public as well. So it's a very interesting dichotomy that people will have to live in as they get older, and I think when we think about our students, and our children as they grow older, they'll live in a world that's definitely less siloed. So thinking about, oh, this is a bank, this is a consumer company, this is a store, those kinds of distinctions I think will start blurring, as you alluded to. – Yeah, so one of my favourite fake news clips of all time was from this website called The Onion, and they did a story, this is, you know, ten or more years ago, so it's very prescient in nature, but it was talking about Facebook, and they revealed, fake, this is just a joke, but they revealed that Facebook was actually a CIA protocol, a secret programme to get people to post their private information in a public way. And they were joking about it; they called it Operation Overlord, I think, and the leaders at Facebook were actually CIA operatives, and the idea was they had been working as, you know, an intelligence agency to get private information on people for so long, and now they realised that everyone just posts it anyway, and they have logs of where they're going and whatnot, right? So the idea, obviously again as a joke, but the idea being that we live in a society where so many things are open, and even the concept of privacy, as you said, doesn't even mean the same thing that it used to mean.
– Well, and to tie that back into something you mentioned about national security, the Facebook example is perfect for that, because we know in the most recent US presidential election, there seems to be a lot of very clear evidence that there were certain elements, tied to various entities in Russia, that used Facebook as a platform to try to influence certain election outcomes in the United States. And so that is very much this idea of the weaponization of data, to influence outcomes that have very important national security considerations, right? Who will be the leader of an important country in the world? And so we're seeing that. – So the same use of data that can make it easy for a large retailer to send you a personalised coupon is the same analysis of data that can also convince you to pick a certain candidate in what is supposed to be a democracy. – That's right. – This is a challenge.

Additional Readings

Module 3 Conclusion
In conclusion, after all the stories about cyber crime, illegal use of cryptocurrencies, hacking and breaches of data privacy, many people, unfortunately, connect the rise of fintech with only bad things. They've lost trust in the institutions and innovators who are driving these changes. And to be sure, there have been a lot of scary stories that require immediate attention. But it's also true that these new technologies can change the world in so many positive ways. So, what do we do? – Well, once again, society has a choice to make. From a proximity standpoint, these concepts may seem so distant that we don't really take the time to understand or even question them. For example, we accept the terms and conditions of websites, like iTunes and eBay, so often that we have become desensitised and don't really think about the potential future implications. Be honest, how many of you actually read those? And innovators are often so distant, or non-proximate, from the users that they can't empathise with their concerns about data privacy. – We have this seeming paradox that pits our legal rights of personal privacy against the vast efficiencies and desirability of fintech innovations. For example, most people love their smartphones, and even those who don't really love them are reluctant to give them up because they've become so integrated into our lives. – But after a period of cultural lag, we are all now becoming aware that by carrying around and using our smartphones, we are giving up some aspects of personal privacy. And we love the idea of being safe and secure, particularly from violent terrorist attacks. But when law enforcement asks large tech firms to decrypt our smartphones, that can be quite unsettling. – But has the era of privacy already passed? Have we already given up so much personal data, via social media and our Google searches and purchasing habits, that these questions about privacy are already moot?
And from an accountability standpoint, maybe you think the big tech firms and banks are so big that you can't do anything about it anyway. I know that I have become so numb to the announcement of large data breaches that I don't even really think about them much anymore. But that probably needs to change. – In fact, maybe the opposite is true. Maybe since we are now more exposed than ever, giving up significant personal data on a minute-by-minute basis, we actually need to have even tighter regulations and controls on the firms who are collecting, using, analysing and sharing our data. – Now here's the part that many of you may not yet realise. The fact is that in many ways, we are not only the consumers, but are in fact the product that these large companies are trying to monetize. How do companies like Facebook and Google, which allow us to use their main services for free, make money? Data. Our personal data is what drives revenue at these companies and many others. – So what should we do? How do we strike a balance between protecting our privacy and ensuring sufficient security and data protection? And who should be accountable for cyber crimes, data breaches and other illicit uses of fintech innovations? – And what we are seeing now is only the beginning. As 5G connections and quantum computing become more common, data collection and analytics are only gonna increase, driving the next iteration of machine learning and artificial intelligence, which you're gonna focus on in the next module.

Module 3 Roundup
– Welcome to our roundup for week three. Can you believe we’re already halfway through the course? Now, we mentioned this last week but it bears mentioning again. We really appreciate all the active participation in the discussion board. It has been really dynamic. I mean, the quantity of the comments has been great, but more really the quality of the insights and experiences that have been shared have really impressed us. We’ve been really blown away and there’ve been definitely a few times where mutually we’ve thought, wow, it’d be really cool if we could build on the discussions in a live classroom. – And we’re also really grateful for those of you who may have joined the course a little late but are not any less enthusiastic in sharing your thoughts, experience, and opinions with us. This course is really meant to be a continuous discussion, so wherever you are right now, please take your time and we’ll try to respond to some of the newer comments in the earlier modules from time to time. And that being said, we also really highly recommend that you read and comment on other people’s posts and take advantage of the full learning community. And, as I said in some of the feedback, the course is really only as good as the learners who are taking it. So we really do appreciate you and thank you and ask you to keep contributing your unique experiences and help further enrich the course. – So, we covered a number of really entertaining but also important cases in module three, which we hope compelled you to think through the implications of new technologies and how they intersect with crime and security. From the comments in the discussion forum, it seems many of you have been thinking about really similar questions too. So we want to spend some time addressing some of the great questions and contributions that were made. – But before we jump into that, just a quick update on enrollment. 
So, we're over 5,000 students now, which is already way more than we ever imagined. And from the feedback we received from many of you, it seems the course has been informative and interesting. If so, please consider sharing it with your friends, colleagues, family, and within your organisations, because we genuinely believe these questions that we're considering in the course are crucial to crafting a better future. Now, with that out of the way, on to the comments. – So, RichardStample had another great comment this week, which is becoming a pattern, very consistent. He had a great comment about the difference between something that is retroactive in the law versus reactive. So maybe, Dave, you could take a crack at that and just share your thoughts. – Yeah, so, first, I was actually really impressed. I made a mistake when I was speaking in that part; it was during one of the conversations we were having, kind of riffing back and forth, and I said that the law is retroactive, and then immediately caught myself and said reactionary. But those are actually two distinct and important aspects of the law. So, retroactive, if you're not familiar, means, in the legal context, that if you create a law, it is put into force as of an earlier time. So let's say, as of today, there's a new law that says taxis are no longer legal. If it was retroactive, you could say the law takes effect from January 1, 2019, and therefore anyone who was operating a taxi service from January 1, 2019 would have in some way violated the law. This legal concept does get used, but typically a retroactive law in this fashion would be something more positive. So, amnesty, for example.
If you entered a country illegally and you've been there for a certain amount of time, then they could say that anyone who entered before a certain date is retroactively, kind of, forgiven. So, what I meant to say, though, and what the conversation was really about, was how the law is reactionary, is reactive, meaning that we tend to create laws in order to solve existing problems after they occur. And this is good because, if you think of it from a Minority Report standpoint, you don't wanna punish people for crimes they haven't committed, and you don't want the law to predict what is going to happen. That's not what the law is for. But what that also means is, if we're always reactionary, if we're always reacting to things that have happened in the past, then from an ethics standpoint it means that often you can have criminals or bad actors, or just normal people, doing what is technically legal but maybe a little bit unethical, and the law is never going to be able to stay ahead of that. So, I appreciate that, for pointing that out. Every time I listened to that segment I would cringe a little bit because I knew I'd made a small mistake. But it is an important concept of the law, and it gets into cultural lag and why the non-material aspects of culture, like the law, are very slow to change, whereas the material aspects of culture, like technology, change very quickly, and there are often gaps in between the two. Okay, so the second comment that we wanted to point out is from joergHK. Again, a frequent commenter; we really appreciate all of your additions to the course.
So, it's two comments in one, basically. He said, "Please don't move fast and break things." And for those that are not familiar with where he's coming from, this is actually a modification of a statement that was made popular by Mark Zuckerberg, who said that, you know, in Silicon Valley– – So, the founder of Facebook. – Exactly, the founder of Facebook. He said, "We move fast and break things." That was kind of the mentality of Facebook and has been adopted by many startup founders in the Silicon Valley region. And so joergHK was saying, again bringing cultural lag into this, "Please, just take a minute, slow down." Break things, disrupt things, sure. But let's take some time and make sure that as we're doing so, we're doing it in a way that's thoughtful in that regard. He also talks about, though, how there are some ways, from a regulatory standpoint, that governments and institutions can advance technology forward while maybe minimising the risk of the breakage, the disruption. And he mentioned something called a sandbox. And so I wanted to ask you to describe: what is a sandbox, especially from a fintech standpoint, and how are they being applied? – Yeah, so, sandboxes are interesting. They've kind of come into vogue in a lot of places in the world as financial markets try to understand how we're gonna cope with these new technologies that come in, with respect to current regulations. Because current regulations were made in the context of a traditional market structure, and there are aspects of new technologies, like fintech, that will come in and change how that happens. And so some creative regulator somewhere, I'm not exactly sure where, said, "Hey, let's have something called a sandbox." This is not like the toy that you played in when you were little. – Although that's what it's named after. – But that's what it's named after.
It's this idea of: let's wall off this space and allow these innovators to play in it, not subject to or constrained by certain regulatory measures, and let's see what happens. – Yeah, give them their toys and let them play and see what happens. – And that will give us indications of perhaps how we should regulate certain behaviours. But if we put current regulation on them, they may actually not be able to grow, and it may not be applicable, but we wouldn't know that because they're gonna be constrained to begin with. And so putting in a regulatory sandbox gives these new companies that are on the fringe of certain regulatory rules an opportunity to expand a little bit, as well as giving regulators a chance to observe what happens and how that occurs. But at the same time, the observation piece is important, because they may not be subject to the current rules and regulations, but in theory they should be observed: the effects, the impact that they're having on customers in particular, how is that working? So one thing we were discussing in a class we had earlier today, a live class that Dave Bishop and I had related to fintech, was the success of regulatory sandboxes in particular jurisdictions in Asia, like Singapore. And one of the things we thought was really great was that Singapore, it seems, has coordinated a number of different policies in conjunction with their sandbox initiative, even from a few years ago. I remember hearing about what Singapore was doing a few years ago. In Hong Kong, we've only recently gone down this regulatory sandbox route, and I think we're still trying to coordinate this a little bit more with broader policies and different things that regulators are trying to do. So we think that's quite an important piece to have if you really wanna cultivate innovation.
Because if you have people and companies that put out new products, but they can’t test them in a neutral way, not subject to the same regulations that a fully licenced and staffed brokerage firm or bank would have to subject themselves to, then that can be very onerous on these new innovators. – Yeah. Great question, thank you. Or great point. – So, one of the other awesome comments that we got in the discussion board this week was about privacy. So, CelesteMunger, I think, it looks like she’s from Canada, talked about her thoughts on privacy, and one of the things that she started off with was that privacy is an illusion. – Privacy is an illusion. Period. – And then she ended with an example of DNA testing, which is another large area where a lot of privacy concerns have been raised recently and will continue to be raised, particularly in the biotech space and technologies related to genetics and things like that. So, on that note, what do you think about these issues of privacy? I mean, they’re super important, but how do we think about them? – So, she is probably right to a certain extent. Privacy is an illusion from the standpoint of a fully traditionally private life, because we are constantly, as she pointed out, being recorded and we are ourselves giving out significant information. But at the same time I’m not sure that’s what the definition of privacy means from a rights standpoint. I think if you think of the right to be forgotten, if you think about the right to be able to pull back your information, if you think about the right to be able to do what you want in your own home, which really is fundamental to many other rights in terms of human sexuality and having children, there are so many aspects of that. Just because we don’t have as much privacy in our lives as we go out in public doesn’t mean that privacy as a right has necessarily been eroded.
And so I think this is the part that we as society have to do maybe a better job of really thinking through. As peter-nyc pointed out in a previous module in one of his comments, the concept of privacy, especially privacy as a right, is in and of itself a relatively recent legal construct. It only started happening a little over a century ago, and even up until the 1970s– – In the United States. – Well, correct. – And then globally later. But we’re talking about a US legal context. – Yeah, in the US legal context, it really started 150 or so years ago but really wasn’t institutionalised or even codified until the 1970s, actually, when some US Supreme Court cases, including the famous Roe v. Wade, which dealt with abortion rights, said there was an implied right to privacy in the US Constitution. And so within the United States and other primarily Western democracies there was this codified right to privacy. And so, in that context, in those nations where that right still exists, I think we do still have a very strong right to privacy, although that does seem to perhaps be eroding slightly. So I think there’s a distinction between the legal right to privacy versus how much information about us is kind of flowing out on a daily basis. And so it’s complicated, but it’s important to be able to parse those things, because as regulation comes in we want, I think, I should maybe speak for myself, I want more regulation on dealing with my private information, but that’s more– – That might not be a technical right to privacy. – Exactly. That’s like personal information that I wanna make sure is being used responsibly and that I understand what’s happening with it. But it’s separate from my overarching constitutional right to privacy, which means when I’m in my own home I can do what I want, that type of thing. – I think that’s an important point.
That there’s potentially a link between traditional forms of the right to privacy, which I think initially were like, in your own home, people shouldn’t be able to just come in and see what you’re doing, and the idea that people shouldn’t just be able to come in and look on your phone. Perhaps there is a link there, but I don’t think that link is traditional in a sense. And maybe that will evolve over time. But we tend to use the vocabulary, a right to privacy, in various forms and I think, to your point, that has evolved over the last century or so in what form that takes. So, early on it was about what you’re doing in your house. But even then, certain activities, physical activities, sexual activities, weren’t necessarily protected for many centuries and decades in America. And then that changed. And then when we get to women’s rights, that idea of what I do with my body, is that a right to privacy? What right does that fall under? Because in a lot of situations these are not explicitly stated, so they’re inferred rights. And so, again, this will I think continue to evolve given how technology is evolving. – Yeah, and I do think it’s interesting because although DNA is not a fintech technology, it is a very interesting example that she’s provided. For those that are not familiar, just to give you an example of a case that happened in the US within the last few years. There was a gentleman in California who did one of these private DNA-testing services. You swab the inside of your mouth, you put it into a vial, you send it in, and then they provide you DNA information about yourself. And maybe what he didn’t realise at the time was that as part of the user terms of service, you also agree for that DNA information to be uploaded to a public website, which then becomes, I guess, I don’t know all the details, but public domain or something. – Yeah, usually. Because I actually did get my DNA tested with one of the commercial providers, I don’t know, a few years ago.
It serves a lot of different purposes. It’s interesting from a genealogy perspective, you can kinda see where your forefathers came from, and there are health indicators that can be helpful. And there’s some question about how accurate they really are, but it gives you a general sense. But yeah, one of the things I remember as I was doing research was they take you through a series of terms and conditions and they basically say, “Would you allow your data to be included in certain databases that will be used and tested and whatnot?” And I always opted out of those because I was kind of aware of those issues. So, basically, the fundamental question is, where will that data end up eventually? And you actually don’t know that, it’s not clear to you. And until that was clear to me I didn’t want to participate, so I opted out as much as I could. And I think, to your point, this person didn’t do that, which ended up– – Do you know what happens? – Yes. – Okay, go ahead and finish. – Well, so the FBI apparently was looking for a, I don’t know if it was the FBI, but police authorities were looking for someone who apparently had killed a number of people. – A serial killer, yeah. – And they had some DNA evidence, and through basically linking of genealogy, so genetics of family trees, they were able to figure out, oh, this person was probably related to this person, and eventually figured out it was this particular individual, who had actually provided his own evidence himself that ended up leading to his arrest. – Yeah, so it was really kind of amazing and yet scary at the same time. So a lot of people that read this story were like, “Wow, that is so cool.” – Like CSI, the TV show. – Yeah, CSI. I mean, these cases had gone cold, I think back in the 1980s, so it’d been 30 or more years. The idea is the killer is long gone, there’s no way we’re gonna find them. And then boom, you’ve got him. But then it’s like, oh, wait, wait a minute.
This guy sent in his vial of DNA, he was not expecting it to be run through a criminal database. And so, again, there was a good outcome, you found a serial killer, but I think it caused a lot of people to think, now, wait a minute, what’s gonna happen 10, 20, 30 years into the future? What if they want to, whatever, because of ideology or race. – Yeah, and this becomes one of these double-edged swords, because I think probably when we were in law school, a number of law schools in the United States got involved in the Innocence Project, where they were basically trying to represent people who they thought were falsely imprisoned. And one of the ways that they were able to help a lot of these people that were incarcerated, usually minorities, socio-economically very disadvantaged, was through the advances in DNA technology. So, oh, actually this evidence that you have is not this person. And then they were able to free a number of people. So, again, of course you don’t want people to be incarcerated wrongly, but at the same time, you can see a lot of different situations where a proliferation of this kind of data, as it becomes commercialised or commoditised and ends up in the hands of actors who are using it maybe not for nefarious purposes but for profit, ends up becoming a problem. So you can easily think of people that need insurance, and an insurance company getting genetic markers, and even if you’re not sick they say, “Well, you’ve got an X percent chance that you’re gonna get sick with this disease, so we’re not gonna insure you.” So these are not necessarily, I don’t think, the outcomes that we want. Or at the very least these are the type of things we wanna think about before just wholesale, let’s open this up. – Yeah, so, again, like many things in the course, a double-edged sword.
There’s a lot of benefits that can come from this, probably some unintended negative consequences, and so we need to be very thoughtful about these things as we roll them out. – Our heartfelt thanks again for your participation and contributions. Putting this course together definitely was not easy, a labour of love with the emphasis on labour. But your enthusiastic engagement has really made the effort worth it. – Now in some ways the next module is really our favourite. In module four we will explore the implications of artificial intelligence, which is relevant now and will only become more so in the future. And we’re sure that many of you are already thinking about artificial intelligence in some way, and we hope the content is interesting and really look forward to your thoughts and reactions. So we’ll see you next week. Module 4 Artificial Intelligence and FinTech 4.0 Module 4 Introduction
So, welcome back, we are halfway through the course now, and now you get to celebrate. So imagine that a friend calls to inform you that she has won two tickets to a concert with your favorite musician performing. The concert is this weekend and your friend invites you to use one of the tickets that she has won. Wow, what a great friend, huh? Now you are super excited and can’t wait until this weekend. As you and your friend enter the lively concert venue, you notice an impressive kiosk covered with multiple flatscreens showing video footage of your favorite musician performing. So you stop for a few minutes to watch some of the videos cycle through, and now you’re really excited for the concert and head to your section ready for a great show. Like you and your friend, thousands of other concert-goers also stopped at the kiosk to watch videos in preparation for the concert. However, what neither you, your friend, nor the other concert-goers realized was that while all of you were watching videos, cameras embedded in the kiosk were also watching and taking photos of you. Your image, along with those of most of the other fans that stopped in front of the kiosk, was captured and analysed by facial recognition technology. You see, your favorite musician has a number of stalkers that have made various threats over the years, so the facial recognition analysis was a precaution to identify anyone that might be potentially dangerous. Does this seem like a scene out of a movie? Or is this a type of technological Big Brother intrusion that seems at least a few years off? This may be surprising to many, but this story is not an imaginary future, it is actually the past, and describes what occurred at a Taylor Swift concert in May 2018 as reported by the New York Times. Besides sharing what was until now our secret, undercover interest in Taylor Swift, this story raises a few important concepts worth exploring.
Now we don’t claim to have all the answers, but we’ll share some of our thoughts, and we invite you to consider these questions as well. First, given the potential threat of stalkers, were the actions of setting up a covert photo-taking kiosk and using facial recognition technology reasonable? And, would your opinion change if someone was caught versus if someone wasn’t caught? And should it? Second, and more broadly, should people be informed that they are being recorded and that the images are being analysed, processed and potentially included as part of a database? At the Taylor Swift concert, the cameras were not readily visible. But the reality for most people, especially in urban locations, is that we are really under near-constant surveillance already. To use another concert example, in April 2018, a man by the name of Ao went to a concert of 60,000 people in China – and unbeknownst to him, during the performance of Jacky Cheung, a Cantopop superstar, all of the people within the audience were having their faces surveilled by cameras. And right in the middle of the performance, police went down the aisle and they actually apprehended Mr. Ao and took him away. It turned out that he was a wanted criminal: during the concert, as he was sitting there, unbeknownst to him, the authorities realized who he was and took him to jail. In another example from China, this public surveillance was highlighted by BBC reporter John Sudworth back in December 2017. Now, it is estimated that there are at least 170 million surveillance cameras all over China and the plan is to install upwards of 400 million cameras over the next few years. So Mr. Sudworth, he visited the city of Guiyang, the capital city of the Guizhou Province of China, which is actually only a few hours from us here in Hong Kong. While in Guiyang, Mr.
Sudworth participated in a little exercise, where he was tasked with avoiding detection by Guiyang’s network of cameras for as long as possible. Now Guiyang is home to about 4 million people, so it’s not a small place. How long do you think he was able to avoid detection? Well… He was discovered and detained by authorities in about 7 minutes. Below this video we’ve provided a link so you can watch a short clip of his experience to put it into context. Additional Readings 4.1.1 Public Surveillance – Privacy vs. Security
So we just wrapped up these very interesting stories and experiences about the use of public surveillance in identifying and capturing people in a variety of public settings, including train stations and concert halls. So, let’s ask a more fundamental, basic question then. The actions of setting up covert photo-taking kiosks or relying on this wide-ranging, wide-scale facial recognition technology: is that reasonable? And if so, when? – Mm. – Dave, what do you think? – Yeah, it’s tricky for me, because on the one hand, I completely understand the kind of public security standpoint. But as someone who grew up in a very conservative place, I guess my immediate, initial thought is one of privacy. Right? – Okay. – So even if there’s one guy in the crowd who may pose a risk, to say Taylor Swift, or maybe even the community, there’s 59,999 other people that are not really posing any threat, and yet they are having their face scanned, information about their location, their preferences, the things that they like, being recorded, and the question, you know, I just, that, for some reason, doesn’t really resonate. It feels really weird to me. – So, I understand the privacy argument and I think it’s important. And I think people like to think, at least, that hey, I’m a person unto myself that should be respected. But what are the real costs for somebody in that audience who, let’s say, is not that criminal, not that threat to Taylor Swift, not the criminal being taken out of the concert hall by the police? At the end of the day, it sounds like their privacy is actually still being preserved, right? – But the thing is, whether we recognise that or not, we are under constant surveillance. And again, you could say that they got the one guy, they got the one bad guy, but everyone else, you know, their privacy wasn’t really violated. What happens when they’re looking for someone based on ideology?
What happens when it’s not a benevolent government that’s utilising that technology? What happens when it’s not a government at all, – Mm. – and it’s private actors that are utilising those technologies to somehow bifurcate society or to restrict mothers’ rights? I mean it really doesn’t require that much imagination to concoct a scenario where an individual, a large company, or even a government could utilise these types of technologies to single out people and potentially cause them very significant, personal injury. And maybe I’m old-fashioned, and I know that you have consumers that are actually choosing this on their own, either knowingly or unknowingly. They’re putting watches on children that surveil them everywhere they go. Obviously our phones, to a certain extent, are kind of watching where we go. And so maybe I’m being naive as a consumer, and maybe this is already occurring, but the idea of linking these things with facial technology or facial recognition software, geolocation and government police powers is something that’s actually quite disconcerting. Additional Readings 4.1.2 Public Surveillance – Accountability and Cultural Lag
So I think the idea of regulation is actually quite interesting, and we’ve talked about it both in the context of this module as well as previous modules. But more broadly, should people be informed that they are being recorded and that their images are being analysed, processed, stored and used in other ways? Is that something that regulation should be concerned about? – Yeah, so the easy answer is yes, of course. I mean, I’m sure if someone’s recording you, you’re gonna wanna know it, and most laws around the world do already have some level of notification requirement. Unless there’s, say, a journalistic exception within the law. But here’s the problem. So, with anything that’s ubiquitous, meaning it’s around us all the time, we become so desensitised, even to warnings, that we just tend to ignore them. So think of like a streetlight or something, right? There’s so many things that are there to kind of guide us, protect us day in and day out– – I guess an example of that would be, like, all these signs we see, “CCTV in operation” – Yeah, exactly. – which we see everywhere. – Which is probably there just because of a legal requirement. – It’s a legal requirement. – To notify you. – Exactly, and so therefore, if they were to use that video recording against you, or perhaps in a court of law, they would be able to say, we were authorised to do so because we met this bare-level requirement. – Notification. – Exactly. If you think of the Taylor Swift example, though, very few people when they buy a ticket to go to a concert are actually gonna read through the terms and conditions of that particular event. I don’t and I’m a lawyer. I’m sure, you know, the same thing probably for you. And on a daily basis, we click I accept, I accept on so many notifications, that again the kinda ubiquity desensitises us to the fact that these are real legal notifications.
So, I think we have to start thinking as a society, if we’re gonna take this stuff seriously, what are not only the moral, but the legal implications in a very practical context, to make sure that we’re taking these notifications seriously, and that we actually understand what rights we’re giving away. Because the reality is, I think, every single day, we’re giving away pretty significant rights. – And so I think that’s really interesting. So, there’s a whole area that is somewhat regulated, and so the example would be – Right. – which you just described: there are a lot of laws talking about notification of when you’re recording somebody, be it audio, visual, whatever. But then there’s this whole other area of law that is still completely unsettled or unregulated – Yeah, yeah. Which is what we’re dealing with now in the context of AI. – which is, okay, now that you’ve processed and analysed all this data, what legal obligation do you have, if you’re the one who’s processed or analysed it, towards the person that you’ve actually recorded? And actually in a lot of places in the world it’s completely unsettled – Yeah. – So much so that there’s actually people or companies that can use that data that they’ve analysed or processed and maybe sell it on to third parties. – Yeah. – Right, and that’s purely because it is unregulated. And so that creates an interesting space, to what you’re talking about, which is, hey, if we don’t as citizens of whatever countries we’re in, or as people, as just citizens of society, if we don’t articulate the values that we want regarding privacy or security or whatever it may be, – Right. – then it’ll be very difficult for us to roll back – Extremely difficult. – or identify, or partition off the rights that we do wanna protect. – Right.
Yeah, and this is a great example, if you remember going back to Module 1, we talked about cultural lag and the idea that it often takes time for the culture within a society to catch up to the change, very rapid change in technology, right? And, you know, thinking of within my own classroom for example, I often ask my law students, raise your hand if you have a camera with you. And there’s usually kind of a few seconds of stunned silence and then immediately it dawns on them that yes, they do have a camera with them, right? – They probably have more than one. – Yeah, smartphone, right, even a smartphone alone has multiple cameras, and so then again, I ask them, okay, well now raise your hand if you have two. They then realise on their laptop, on their iPad, in all these devices they actually have multiple cameras with them right there in that moment. And so, you know, if you think about that from a cultural lag perspective, these technologies change so quickly that we have them on our person at all times. Which means that we as individual citizens are also the ones that are kind of surveilling those that are around us, right. Now what do you see? You go on YouTube and you’ll see footage of an auto accident where normal everyday cars are filming everything that’s going on, right. You’ll see individuals getting into a fight, or an altercation, and they automatically whip out their phone, right. And so it’s interesting how, again, we’re not just talking about governments here. And these technologies are expanding so that the ability, the costs, the size of the files, the stream rate, all these different things are making it so that this is really around us all the time. And again, we have to take some time to really evaluate from a cultural perspective how we expect these things to evolve, because if we don’t, then the companies, through various forms of capitalism, are gonna make those decisions for us. – Yeah.
And those lessons are broadly relevant to artificial intelligence, but also specifically relevant for the issues that we’ll face in FinTech and financial technologies. – Absolutely. Yeah. Additional Readings 4.2.1 What Is Artificial Intelligence (AI)?
Hopefully, you are still with us and not scrolling through Taylor Swift music videos. Because what we’re going to explore in the rest of this module is interesting and important… and Taylor Swift will still be there after we’re done, we promise. Initially, the story about Taylor Swift’s concert or the BBC reporter’s experience in China may not seem to be related to FinTech, right? So where is the connection? Well, the advances in surveillance we’ve shared with you are not just about more and better cameras, but really about the facial recognition and identity analysis software that is growing more efficient due to advances in artificial intelligence (“AI”) and other technologies that fall under the broad umbrella of AI, like machine learning. Now if that phrase is vague to you right now, don’t worry, we’re going to get to that soon. Now people have been working on facial recognition software and forms of AI for a while. In fact, a trio of early technologists, Charles Bisson, Woody Bledsoe, and Helen Chan, researched how computers could be used for facial recognition as early as the 1960s. So today’s “hot” concepts did not just pop up, but because of the increases in computing processing power, the potential of AI is starting to be realized, which has propelled AI into the public discourse, and rightfully so. So what that means is, for those of us participating in this course, you and me, in our lifetimes, many of the big leaps in FinTech will be enabled by computing power that has resulted in more mature, developed AI. Thus, a major theme of the still-developing FinTech story is the increasing influence and applicability of, first, machine learning, and more broadly, artificial intelligence. And this is what we want to explore in this module. So to help us get started, let’s consider a few terms, some buzzwords, so that we have the right vocabulary for our discussion.
Now keep in mind, the definitions of many of these terms are not uniformly consistent yet, and even experts themselves may have slightly different approaches or views, but we went with a few definitions that we think are not just comprehensive but also comprehensible even if you’re not a technology expert. So what is artificial intelligence or AI? AI is really an umbrella term that encompasses a number of technologies, but before jumping into that, let’s start with some history. Alan Turing, the pioneering English computer scientist and mathematician, and at least one of the grandfathers of AI, first started considering AI concepts even before 1950. His eponymous Turing Test moved beyond the question of “Can machines think?” to the more nuanced question of “Can a machine imitate a human?” Basically, if a computer and a person were answering questions that you asked, but you didn’t know which answers were given by the human or the computer, would you be able to identify the computer from its answers alone, or could the computer trick you into thinking it was a person? And John McCarthy, long-time Stanford professor and one of the fathers of AI, who is widely credited with coining the term “artificial intelligence”, expanded further. To “Uncle John”, as he was referred to among many of his students, AI is the “science and engineering of making intelligent machines.” But what then is intelligence? Stephen Hawking is widely credited with saying, “Intelligence is the ability to adapt to change.” And so the increasing capacity of machines to learn and react as new data is presented represents this process of adapting that is at the core of Hawking’s view of intelligence. Increases in computing power, coupled with the creation, collection, and analysis of an ever-growing amount of data, will continue to enhance the capability of artificial intelligence. Additional Readings West, D. M. (2018). What is Artificial Intelligence? Brookings Institution.
Retrieved from https://www.brookings.edu/research/what-is-artificial-intelligence/ Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460. Retrieved from https://doi.org/10.1093/mind/LIX.236.433 Sharkey, N. (2012). Alan Turing: The Experiment that Shaped Artificial Intelligence. BBC News. Retrieved from https://www.bbc.com/news/technology-18475646 Torres, B. G. (2016). The True Father of Artificial Intelligence. OpenMind. Retrieved from https://www.bbvaopenmind.com/en/technology/artifficial-intelligence/the-true-father-of-artificial-intelligence/ Cameron, E. and Unger, D. (2018). Understanding the Potential of Artificial Intelligence. Strategy+Business. Retrieved from https://www.strategy-business.com/article/Understanding-the-Potential-of-Artificial-Intelligence?gko=c3fb6 Brundage, M., et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Future of Humanity Institute. Retrieved from https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf 4.2.2 What Is Machine Learning?
Now that we’ve touched on AI, let’s move on to machine learning. Machine learning really is a subset of AI, and often when people refer to AI, they are usually talking about machine learning, especially when it comes to FinTech. For example, advances in algorithmic trading are being powered by machine learning. Additionally, the ability of financial institutions to manage risk, detect fraud, and even optimize operational processes is being made more efficient and accurate through machine learning. Even lawyers like us, correction—former lawyers, who maybe thought we were immune from technological change, are being impacted as machine learning technology is already being implemented to review documents, like contracts or loan agreements, much faster, cheaper, and even more accurately than a human could. Sounds exciting, right? So let’s jump into it. What is machine learning? Now machine learning is effectively a machine, say a computer, combing through and analyzing, with statistics, large amounts of data to find patterns. Now that data could be in the form of text, like in a loan document, or it could be a series of numbers, like stock prices, or a whole host of other types of information. Now based on that data, the machine can start making predictions; as more data comes in, the predictions become more refined. Now most of us interact with machine learning almost on a daily basis — basically whenever we enjoy any kind of service that recommends things to us, you know, like the new show that Netflix is going to recommend to you tonight. Lastly, machine learning can be further specified as supervised learning, where the data is labelled or identified; unsupervised learning, where there are no such identifying markers; or reinforcement learning, which is what Google’s AlphaGo represents, and is based on the machine figuring out things after exploring multiple permutations of outcomes — so basically there’s a massive iteration process of trial and error.
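To make the supervised case a little more concrete, here is a minimal sketch of a machine "finding patterns" in labelled data: a one-nearest-neighbour learner in plain Python. To be clear, the borrower features, labels, and numbers below are entirely invented for illustration, and real credit models are far more sophisticated; the point is only the structure, labelled examples in, predictions out.

```python
# A toy supervised-learning sketch. The "training data" is labelled
# examples (hypothetical borrower features -> outcome), and the model
# predicts the label for new, unseen borrowers by finding the most
# similar past example (1-nearest-neighbour).

def nearest_neighbour_predict(training_data, new_point):
    """Predict new_point's label from its single closest labelled example."""
    def distance(a, b):
        # Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    closest = min(training_data, key=lambda example: distance(example[0], new_point))
    return closest[1]  # the label of the nearest example

# Labelled examples: (monthly income in $k, debt ratio) -> outcome.
# All values are made up for illustration.
training_data = [
    ((9.0, 0.1), "repaid"),
    ((7.5, 0.2), "repaid"),
    ((3.0, 0.8), "defaulted"),
    ((2.5, 0.9), "defaulted"),
]

# New applicants the model has never seen:
print(nearest_neighbour_predict(training_data, (8.0, 0.15)))  # repaid
print(nearest_neighbour_predict(training_data, (2.8, 0.85)))  # defaulted
```

Adding more labelled examples to `training_data` refines the predictions, which is exactly the "more data in, better predictions out" dynamic described above.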
Additional Readings 4.2.3 What Is Deep Learning?
Since we’ve mentioned machine learning, it’s important to briefly touch on something called deep learning. Now we won’t spend much time on deep learning here, but because many advances in FinTech will be built on deep learning moving forward, it’s worth explaining, even for just a few seconds. Deep learning is basically an enhanced form of machine learning that uses algorithms that emulate the neural network of the brain — basically how our brains learn — to help the algorithm learn through a progression of layers that get “deeper” and “deeper” as more data is incorporated. So like machine learning, deep learning can be supervised, unsupervised, or reinforcement-based. If you weren’t familiar with those terms before, hopefully they make a little more sense now, and hopefully you also better understand the relationship amongst AI, Machine Learning, and Deep Learning. And we also hope that you’ve noticed that these forms of AI all rely on massive amounts of data, which is why our discussion of data at the beginning of this course is so important. Data truly is the fuel that will power AI-backed FinTech innovation. Additional Readings 4.3.1 AI and the Trolley Problem
We’ve just discussed AI and data. Now, let’s think about that in the real world by returning to something we discussed in module 1, where we introduced the trolley problem. If you recall, we talked about two scenarios where a runaway trolley was about to hit a group of five people. In one scenario you had the choice to divert the trolley with a switch, changing the trolley’s direction so that it hits only one person, who is killed by the impact. In the second scenario, instead of a switch, you would have to push a person in front of the trolley to stop it, thus saving the group of five but killing the person you pushed. Nearly everybody chooses to divert the trolley with the switch, and nearly all object to pushing a person into its path. This dichotomy highlights the important role of proximity in people’s decision-making: how close we are to a given context, or how personal it feels, can alter our decisions completely. In recent years, the trolley problem has morphed into other dilemmas that have become popular in the news and in the media. This is especially true for AI and self-driving cars. With autonomous vehicles on the horizon, self-driving cars will have to handle choices about accidents, like causing a small accident to prevent a larger one. So, this time, for our hypothetical scenario, instead of a runaway trolley, think of a self-driving car, and instead of a switch to redirect the car, the “switch” is the self-driving car’s programming. For example, imagine a self-driving car travelling at high speed, with two passengers, when suddenly three pedestrians step into the crosswalk in front of the car. The car has no chance of stopping. Should the car hit the three pedestrians, who will likely be killed? Or crash into a concrete barrier, which would likely kill the two passengers? Now imagine you are a passenger in the car. What would your answer be then?
And what car would you ultimately buy? A car that saves you, the passenger, at all costs in any scenario, or one that minimizes harm to all, but which may ultimately harm you? If there were no self-driving vehicle and you were the driver, whatever happened would be understood as a reaction, a panicked decision, and definitely not something deliberate. However, in the case of a self-driving vehicle, if a programmer has developed software so the vehicle will make a certain type of decision depending on the context, then in an accident where people are harmed, is the programmer responsible for that? Is the car manufacturer responsible? Or who is? Is there even an answer to what a self-driving car should do? Now researchers at MIT, the Massachusetts Institute of Technology, revived this moral quandary when they created a website called the Moral Machine. Through that website, respondents around the world were asked to decide various self-driving vehicle scenarios, such as whether to kill an old man or an old woman, an old woman or a young girl, the car’s passengers or pedestrians, and many other similar questions. Since its launch the experiment has generated millions of decisions, and an analysis of the data was presented in a paper in the scientific journal Nature in 2018. The study sparked a lot of debate about ethics in technology, which is the purpose of this course. So given that, we’d like to ask you a few questions. One, trust: who should you trust? Should we trust AI, or should we trust humans? Two, responsibility: who is responsible if something bad happens? In the context of an autonomous vehicle, is the car manufacturer responsible? The software programmer? Another stakeholder? And third, culture: what is the role of culture in all of this? Let’s consider these questions together.

Notes on AI and the Trolley Problem:
It is important to note that the trolley problem is fundamentally about showing how we process information and highlighting blind spots in our decision-making. Doing so hopefully helps us improve our choices by demonstrating the need for morality and a sense of responsibility to humanity in our decision-making. And to the extent we think that morality, emotion, and humanity are important and worth developing, you could say that by linking AI and driverless cars to the trolley problem, we may be doing the opposite of what was intended and missing the point altogether, possibly to our mutual disadvantage. We should be wary of making the whole conversation less proximate.
Additional Readings

Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J. F., & Rahwan, I. (2018). The Moral Machine Experiment. Nature, 563, 59–64. Retrieved from https://www.nature.com/articles/s41586-018-0637-6 (paywall)

Huang, E. (2018). The East and West Have Very Different Ideas On Who To Save In A Self-Driving Car Accident. Quartz. Retrieved from https://qz.com/1447109/how-east-and-west-differ-on-whom-a-self-driving-car-should-save/

Hao, K. (2019). Giving Algorithms a Sense of Uncertainty Could Make Them More Ethical. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/612764/giving-algorithms-a-sense-of-uncertainty-could-make-them-more-ethical/

4.3.2 AI and the Trolley Problem: Trust and Proximity
So, let’s talk about trust. Now, Dave, you’ve been a passenger in a car that I’ve driven before. – I have. – So, who do you trust more, me or the autonomous vehicle? – Well, as much as– – Tough question, I know. – No, so as good a driver as you are, the reality is, well, I don’t know. I guess this is the thing that is so disconcerting for a lot of people. I think when I’m in your car, I trust you. If I was the one driving, I would certainly trust myself, right? But I think for a lot of people, we just have a question about this completely autonomous, non-human actor, and not singular, either, but potentially thousands of these non-human actors that are gonna be out there with these large vehicles roaming around. The reality is, I think, that I would probably want to trust you more because you’re my friend and I know you, but I think, empirically, it is probably a lot safer with a host of autonomous vehicles out there. – Yeah, I think you’re right. I think a lot of the research as we have it now demonstrates that overall, things will probably be safer as more and more autonomous vehicles are on the road. But why do so many people, or why do you think so many people are so resistant to that? – Well, I mean, I think I’m going to flip it around and ask you, right, ’cause this trust is a big part of what we’re talking about, and the fundamental, I guess, foundation for so much of this ethics conversation. Do we over-trust ourselves? Do we under-trust technology, or is it the other way around? Like, are we rushing so quickly into these technologies without really understanding whether or not we should be trusting them? – Yes, so I think two things off the top of my head. One is, you know, as humans, we tend not to trust things that we don’t understand. – Right. – Right.
And so, I think that plays a lot into it: hey, I don’t understand how this exactly works, so I’m gonna distance myself from it, or I’m gonna be suspicious of it until I do understand how it works. I think there’s a lot of that. Two, I think there’s this idea of over-confidence, right? Again, as humans, we tend to be more confident in our own ability than we probably should be. There have been tons of tests where that’s been shown, right? And I think the combination of those two things, of hey, I’m actually not that bad a driver anyway, so it should be okay, plus I don’t understand what’s going on in this car with no driver, those two things kind of collide, I think, amongst humanity to create this situation where, hey, maybe I’m resistant to this change. – Yeah, so from the perspective of trust, I think you kind of hit on what we discussed earlier from a cultural-lag perspective, right? A lot of people are gonna be very comfortable continuing in that perceived safe method of travel, when in actual fact the numbers maybe don’t bear that out, and they’d be willing to persist with a situation where they’re the driver rather than going into a potentially safer autonomous vehicle. And I think this gets to another interesting point that is often a criticism of the trolley problem in the first place: it presents this binary, almost illogical situation where you have to choose between one person dying or five people dying, or some really fantastical situation, when that’s probably not the case at all. – Hey, it’s not reflective of real life. – Sure, they’re not reflective of real life, and so, I guess my question for you, from an autonomous-vehicle and AI perspective: I think one of the real reasons why people in this industry are saying you should trust autonomous vehicles more is because they can communicate with each other kind of seamlessly and simultaneously.
What’s your perspective on that? I mean, is that kind of how it would work, and how would that potentially make things better, safer, smoother? – So I think that’s a really cool question for at least, again, two reasons off the top of my head. I think historically, when you look at the earliest versions of this kind of autonomous driving, there was an idea that these vehicles would not actually be independent. They would somehow be in sync with each other to make driving much more efficient, so I think the more advanced forms of autonomous driving will be exactly what you are talking about: a linked network of vehicles that collectively will be able to gauge risk and, holistically, make things safer overall. So I think there’s definitely that component. I think the second thing ties into what you’re talking about with respect to this binary kind of false dichotomy, right? – Yep. – It’s like binary code. There’s either a zero or a one. – Yeah, there’s lots of information, it’s not just either-or. – Exactly, and if you talk to people who operate in this space, either an auto manufacturer who’s trying to go into autonomous vehicles, or on the software side, people who are developing the software, you know, almost uniformly they will tell you that it’s never binary. – Yeah. – It’s always multiple different outcomes and things that can happen. And, you know, based on what we discussed just a few minutes ago about AI and machine learning and deep learning, and this idea that these systems will go through multiple permutations based on the data that’s being inputted, historical data as well as data that’s coming in live, they look at what the different outcomes will be. So what that tells us is that there will probably be multiple outcomes anyway, which is more reflective of reality, which gets to the criticism that a lot of people have about the trolley problem.
Additional Readings

Sage, A., Bellon, T., & Carey, N. (2018). Self-Driving Car Industry Confronts Trust Issues After Uber Crash. Reuters. Retrieved from https://www.reuters.com/article/us-autos-selfdriving-uber-trust/self-driving-car-industry-confronts-trust-issues-after-uber-crash-idUSKBN1GY15F

Kaur, K., & Rampersad, G. (2018). Trust in Driverless Cars: Investigating Key Factors Influencing the Adoption of Driverless Cars. Journal of Engineering and Technology Management, 48, 87-96. Retrieved from https://doi.org/10.1016/j.jengtecman.2018.04.006

Verger, R. (2019). What Will It Take for Humans to Trust Self-Driving Cars? Popular Science. Retrieved from https://www.popsci.com/humans-trust-self-driving-cars

Baram, M. (2018). Why the Trolley Dilemma for Safe Self-Driving Cars Is Flawed. FastCompany. Retrieved from https://www.fastcompany.com/90308968/why-the-trolley-dilemma-is-a-terrible-model-for-trying-to-make-self-driving-cars-safer

4.3.3 AI and the Trolley Problem: Cultural Lag
I’m curious to hear what you think in terms of, again, cultural lag and how we could implement these things. There are certain jurisdictions that are way out front in terms of trying to establish the physical landscape that would allow these systems to take place. So, one of the more famous ones would be certain parts of Arizona, for example, in the United States, right? And they’re trying to make sure the physical infrastructure is there to kind of speed through that cultural lag. But it is also interesting, the juxtaposition that any time an autonomous vehicle hits someone, or hits anything, it’s global news, right? So what is that juxtaposition, and why is it news just because there’s an accident? – One, I think the nature of media now is that people want to see headlines, right? And so I think there’s something exotic still about, “Hey, AI, or an autonomous vehicle, even though statistically you’re probably still safer, even now, than with the average driver.” – Yeah, ’cause they’ll be quick to point out, these cars have been driving around for thousands of hours– – Miles, right. – Yeah, thousands of miles. – Kilometres, yeah. – Yeah, exactly, yeah, yeah. – And we don’t report on every accident that happens. – Right, or non-accident. – Or non-accident that happens. Yet if a vehicle that’s been driven so long has one accident, just because it’s autonomously driven, now it’s an issue. So, I think there’s a bit of a media frenzy around autonomous vehicles, partly because it’s a bit sexy right now and partly because people are not sure what it is and what’s gonna happen. I do think the Arizona example is interesting, because there are definitely pockets of geography in different places in the world. So in the United States, you mentioned Arizona, but if you go to Silicon Valley now, you see Google Waymo vans everywhere, right? – Yeah, on the Google campus, yeah, yeah, yeah. – And so, there’s a bit of that.
I think outside of the United States, one place that’s really interesting, or has been at the forefront of this, is Japan, which has instituted, at the national level, a series of legislation to allow autonomous vehicles and even trucks in the next few years. – Yeah. – And so they’re quickly trying to build up the technological infrastructure, as well as the physical infrastructure, to allow these kinds of vehicles to operate more effectively and efficiently. – Yeah. – And so I think that’s an important piece, and once you have national and local leaders behind it, the regulatory landscape will change pretty rapidly around that, and once that happens, insurance will change, ideas of liability will change, and those kinds of processes will start developing. – So let’s talk about that for a minute. Again, for all of you out there, just think for a moment. We’re talking about changing technology, and how culture, and therefore laws, have to change and catch up to it. We’re talking about regulation. Who would be responsible if there was an accident? So think about that for a moment. Let’s say you’re walking down the street. You’re crossing the street, and all of a sudden an autonomous Uber or delivery vehicle, maybe a Google bus or something, cuts you off and ends up knocking you down, causing some injury. Think about it for a minute. Who would, or should, be responsible for that? – Well, maybe, as a starting point, think about what would happen if it was just a normally driven vehicle, without the autonomous software driving it. Who would be responsible? We would go through a very typical kind of legal analysis. Insurance people would be involved. A police officer would show up and do a police report, and they would probably attribute some negligence to the driver, or maybe to you if you were jaywalking. – Yeah, different people. – And there would be different people.
I think that would be the initial starting point here as well, just because a vehicle that has no driver is involved in an accident. – Yeah. It doesn’t change the entire dynamic. – It doesn’t change that entire dynamic. Exactly. Now, I guess that goes to a more fundamental question, though. Let’s say there’s something inherently wrong with the software, or with the autonomous vehicle itself. Then who would be responsible? Would it be the software programmer, the developer who created the AI software, or their company? Or would it be the car manufacturer that actually manufactured the vehicle? Or would it be the owner of the vehicle, who’s not even driving? – Yeah. – But they actually own maybe a fleet of these vehicles. – It does make it difficult, though. I mean, although it doesn’t change things entirely, there is one big missing component: the driver. Right, so currently, under tort law almost everywhere in the world, if a car strikes someone, then the driver is almost universally gonna be responsible, so it certainly does limit the number of people that could potentially be responsible. – Yeah, so I think you’re right. Overwhelmingly, if a driver has an accident with a pedestrian, in most situations the driver is gonna be held responsible for that. I think the proxy for that, moving forward with autonomous vehicles, would be whoever owns the vehicle. Now, the thing that would be really interesting, I think, is the next iteration. As the technology advances, the idea of owning a vehicle becomes vastly different from before. – Yeah, yeah. – It may be owned collectively by a neighbourhood, or by– – Or it could be a utility, like electricity. – Exactly, or it could be a utility, or owned by a company that has a fleet of taxis, but similarly, we’ll just have a few. And so depending on how these assets are owned, the idea of ownership will also become very interesting, and so will how you hold those people accountable.
– That is why I hold a little bit of concern in this regard, because typically, the bigger the actor is, the more challenging it becomes for an injured individual to seek redress and to recover any type of damages. So for example, if it’s you versus Uber, that’s a significantly– – Power dynamic is very skewed. – Very different power dynamic than if it was me versus you, let’s say, right?

Additional Readings

4.3.4 AI and the Trolley Problem: Cultural Differences and Biases
Okay, so we’ve covered trust and responsibility, and the challenge of getting these things going from a cultural-lag perspective. But I thought one of the most interesting things to come out of this, especially from the MIT study in particular, was the way various elements of culture, and perhaps bias, kinda came out, and the potential programming implications from an AI perspective. Can you talk about that for a minute? – Yeah, so I think that’s what got picked up by the media the most. – Everybody was talking about– – Everybody was talking about the cultural implications of this Moral Machine, the data that came out of these surveys that people were doing. Effectively, how different cultures, or at least the way it was painted, how different cultures prioritise life in a sense. If that life was in the car with you, it could be more valuable in a certain cultural context than the life outside the car, the one you’re potentially hitting. And so how do you try to protect one life over the other? Now, that’s a simplistic explanation, but there were a lot of very interesting takeaways. They found that Chinese respondents were more likely to choose hitting pedestrians on the street instead of putting the car’s passengers in danger, and were more likely to spare the old over the young. People from Western countries tended to prefer inaction, letting the car continue on its path, kind of like inertia, while Latin Americans preferred to save the young. – Okay, so I wasn’t that surprised when I saw the results from the MIT study, and it showed that Asians, for example, were more likely to preserve the life of the elderly at the expense of the young. I think, having been in Asia for the past 20+ years, many cultures here have a reverence for the elderly. – Deference. – Deference at least, yeah.
And so, I think there were certain things like that that maybe weren’t that surprising and fit certain cultural stereotypes that have been around for a long time. But I guess the bigger question is not that these cultural preferences exist, but what we should do with them as a result, especially when programming FinTech and other things in the future, right. I think you and I both know that one of the challenges that lawmakers, or ethicists like us, or companies face as they try to create a moral code for their employees is creating one that permeates cultures and goes across country lines, right. So, is it possible, or should it even be a goal, from an AI and technology standpoint, for us to create a uniform sense of morality? – Yeah, that’s a really important question, to be honest. So let’s take one step back before we even get to the technology. Take the example you gave of, let’s say, a Western company that does business all over the world– – Yeah. Asia, Africa, the Middle East– – They write a code of conduct in California. – Yeah– – So then they have to apply it everywhere. – And now they want to make it universal. – Yeah, yeah. – But potentially, what is right in their initial cultural context may actually be questionable in a different cultural context. – Or still perhaps “right” from a legal or moral perspective, but communicated in a way that doesn’t resonate with local people. – Sure. And so there are a lot of implementation challenges, to say the least, when companies try to embark on this kind of initiative. So, if we transport that into AI, and technology in general, what comes to mind here is that the automobile industry is a global industry, right? We have car manufacturers in China, in Japan, in Korea, in the United States, and a whole host of other places.
And so, imagine a programmer who sits somewhere in Asia, with a certain cultural context, programming a particular type of AI into a vehicle, with some of the results potentially from the MIT study. Let’s say that vehicle is then imported or shipped to the United States, and that particular cultural context bleeds into how the vehicle operates in a different cultural context. – Right. – And then how does that vehicle, with its particular cultural influence, interact on the road with other vehicles that have a different cultural influence? How do those all interact? I think that’s a really fascinating and important question. It’s a microcosm of a greater host of challenges that AI will bring to the forefront, the type of things that we need to discuss as a society. – Yeah, and so, again, foreshadowing a little bit, but also revisiting the very first module. While these are interesting practical challenges that all of us have to consider as we enter this new wave of the Fourth Industrial Revolution, we haven’t even touched upon the most critical of these issues: the fact that one of the most common forms of work globally is driving. – Drivers. Certainly within the US, and other places like China, et cetera. There are so many millions and millions of drivers around the world, and so this kinda leads into more fundamental, systemic, social questions: if we remove these jobs from the equation, how do we then reintegrate those people into the workforce? How do we ensure that society is able to absorb those people and provide them not only jobs, but a sense of well-being? And that’s something that we’re gonna be considering in the next few modules.

Additional Readings

4.4.1 Data and Models
Since data is so critical to AI, as well as to many of the other technologies that underpin FinTech, it is important not only that the right data is being used but also that such data is not biased. The phrase “garbage in, garbage out” has probably never been more apt, nor as important, than when describing AI. Bias can find its way into AI in a few ways. Let’s take a simple example. If a computer model is using data that is already contaminated by some level of discrimination, then the output will also inevitably be prejudiced. Say, for instance, your AI relies on data from apartheid-era South Africa; chances are that data incorporates the widespread racist policies that existed at that time. Obviously, this would lead to less than ideal outcomes. And even assuming your data is free of such explicit bias, there are other ways for bias to creep into artificial intelligence. For example, cultural biases and norms can inadvertently be programmed into AI, because a programmer from one culture might value some characteristic differently than a programmer from another part of the world. We’ll explore this a bit further when we revisit the trolley problem. There are other potential issues that also relate to bias. AI is driven by algorithms and models. In her thought-provoking book, Weapons of Math Destruction, Harvard-trained mathematician Cathy O’Neil identifies three characteristics of a possibly dangerous model, or what she calls a “WMD”. The first characteristic of a dangerous model is that it is opaque and not transparent. This would be the case if the system is what we call a black box, and it’s difficult for those on the outside to really understand what is going on behind the scenes. The second is that the model is potentially scalable and can be used broadly or across large populations. Now of course, this has been a key component of what we’ve talked about thus far.
The issue with a lot of these AI and other forms of technology is that they can scale beyond anything we’ve seen before. And the third characteristic is that the model would potentially be unfair in a way that would negatively impact or even destroy people’s lives. So for example, if AI were being used, in a FinTech context, to determine who could get a mortgage to purchase a home, who has access to credit, and so on, these would be things that could have a significant negative impact on someone who was not granted access to them. So despite all the good that will certainly accompany the rise of AI, it’s also pretty clear that biased data, in conjunction with possibly suspect models, has the potential to create more risk, unfairness, and inequality, which is why it’s important to be aware of their impact and invest time thinking about how to prevent such problems now, before the technology is fully mature and really permeates our lives. In the next few cases, we’re going to look at some of these warning signs in real-life scenarios.

Additional Readings

4.4.2 Mortgage Application
In the last section we explored AI, particularly in relation to autonomous vehicles, and considered really important topics around trust, accountability, and the impact of culture. Next, we will look into AI bias, specifically in the context of assisting human decision-making. Often, when we think of AI or algorithms, we think of something impartial and neutral, something that simply acts on pure facts. And this is one of the reasons why we humans have begun using AI to help us with more subjective evaluations and decisions. If we could remove human error from decision-making, that would lead to a more just and better world, right? But the reality tends to be that algorithms are not as neutral as many have come to hope, because of bias that gets programmed in, whether cultural bias from the programmer or historical bias in data that is prejudiced in a certain way. Google AI chief John Giannandrea has said that his main concern regarding AI does not revolve around killer AI robots or Terminator sorts of things; instead, he is worried about biases “that may be hidden inside algorithms used to make millions of decisions every minute”. So, first of all, what do we actually mean when we say that AI or an algorithm is biased? If you recall our talk about machine learning, a vital part of that revolves around the training of AI: training it to see and follow patterns by feeding it large amounts of information and data, training it to understand what success looks like, fine-tuning the results and reiterating, and so forth. And in this process there is the possibility of human error and prejudice integrating itself into the algorithm. Let’s take a look at another example. In the past, if you were about to buy a home, you would typically meet in person with a mortgage officer, probably at your local bank.
You would visit their workplace, have a chat, and provide any relevant documentation; this person would then review your documentation and later give you a decision on whether the bank was going to lend you money or not. For the lending officer, this would typically be a fairly subjective exercise, because the majority of home loan applicants fall into some level of grey area where there’s no definitive “yes or no” with respect to loans, so they have some discretion. With the recent advent of more advanced algorithms, and to increase efficiency, this process has been simplified at many banks, where the decision-making is now, to some extent, outsourced to AI, which makes the recommended loan application decision. By doing so, this process should be more accurate, objective, and fair, right? Well, not always. Amongst the many studies that have been done, a recent study by the University of California found strong bias and discrimination by these “AI lenders”, such as charging 11 to 17 percent higher interest rates to African American and Latino borrowers. Additionally, minority applicants are more likely to be rejected than white applicants with a similar credit profile. Now, lending discrimination is not something new, and has been reported on a lot in the past. The Washington Post, for another US example, uncovered widespread lending discrimination back in 1993, when it showed how various home lending policies were negatively impacting minority residents. What further complicates the problem around AI bias is what people refer to as black-box algorithms. This is similar to what we discussed earlier about opaque models lacking transparency. And really, private companies are generally hesitant to open the door for other people to scrutinize what they’ve been doing. So how do we make an inclusive algorithm when the data, its developers, and the organizations who hire them are seemingly not diverse and inclusive?
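To make the worry about learned bias concrete, here is a deliberately simplified sketch. Everything in it is invented for illustration: the groups, the scores, and the crude threshold “model” (no real lender works this way). But it shows mechanically how a rule learned from biased historical decisions reproduces the same disparity for new applicants with identical credit profiles.

```python
# Hypothetical past lending decisions: (group, credit_score, approved).
# The history is biased: group B applicants were only approved at high scores.
history = [
    ("A", 650, True), ("A", 600, True), ("A", 580, True), ("A", 560, False),
    ("B", 650, True), ("B", 620, False), ("B", 600, False), ("B", 560, False),
]

# "Training": learn, per group, the lowest score that was ever approved.
threshold = {}
for group, score, approved in history:
    if approved:
        threshold[group] = min(score, threshold.get(group, score))

def predict(group, score):
    """Approve a new applicant if they clear their group's learned threshold."""
    return score >= threshold[group]

# Two new applicants with identical credit profiles get different answers,
# because the historical bias is now baked into the learned thresholds.
print(predict("A", 600))  # True  (group A's threshold was learned as 580)
print(predict("B", 600))  # False (group B's threshold was learned as 650)
```

Nothing in the code mentions discrimination; the disparity comes entirely from the data the rule was trained on, which is exactly why opaque models are so hard to challenge from the outside.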
Overall, while algorithms are helpful, they may not make things as fair as we would ideally have hoped. We therefore have to be careful about blindly applying them, especially since they have a tendency to repeat past practices, repeat patterns, and automate the status quo.

Additional Readings

Knight, W. (2017). Forget Killer Robots – Bias Is the Real AI Danger. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/

West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating Systems: Gender, Race and Power in AI. AI Now Institute. Retrieved from https://ainowinstitute.org/discriminatingsystems.html

Brenner, J. G., & Spayd, L. (1993). A Pattern of Bias in Mortgage Loans. The Washington Post. Retrieved from https://www.washingtonpost.com/archive/politics/1993/06/06/a-pattern-of-bias-in-mortgage-loans/d04bcb29-d97b-44b5-b4e0-93db269f8f84/

Counts, L. (2018). Minority Homebuyers Face Widespread Statistical Lending Discrimination, Study Finds. Berkeley Haas. Retrieved from http://newsroom.haas.berkeley.edu/minority-homebuyers-face-widespread-statistical-lending-discrimination-study-finds/

Hao, K. (2018). Can You Make an AI That Isn’t Ableist? MIT Technology Review. Retrieved from https://www.technologyreview.com/s/612489/can-you-make-an-ai-that-isnt-ableist/

4.4.3 Mortgage Application – Trust
So let's think about a question. Imagine that you went to a bank and applied for a financial product, like a home loan. You submitted all the paperwork, it was processed by the bank, and a few days later you were rejected. You went back to the loan officer and asked why, and they said: hey, our AI decision-making software screened and scanned your application and said, unfortunately, no. What would you do? Dave, what would you do? – It's tricky, right? I mean, it's already hard enough to communicate with banks as it is, and now they're moving it into this completely amoral space where essentially this software is going to be making the decision. And I'm not really sure you would have any recourse, would you? They're not going to give you access to the algorithm, they're probably not going to show you exactly why, and it just seems like one step further away from a balanced negotiation between you and the service provider, right? – Yeah, and I think that's a good point, and the idea of not having recourse is really key. It raises really fundamental questions about what's fair, because if your rejection by the bank was based on some level of latent discrimination – based on biased data or other forms of bias that may exist in that AI process – then there are real issues of fairness if you can't go rectify that. – Yeah, well, why is it so critical that banks, or really even we more broadly, think about these questions now? Especially something like bias – isn't that something that should come out later on? – I think what we're finding already is that the longer we wait, the more difficult it will be to implement cleaner AI with cleaner or better-filtered data. And that's partly because data compounds. 
You know, there are troves of data being produced every day, and if we're not aware of how that's compounding and of the negative inputs that are already there, then that potentially becomes a problem. Particularly since a lot of that data is based on historical data that we know incorporates bias, because societal norms were different in the 1960s or 1970s versus what they are today. – Well, can't they just clean it up? Can't we just make it neutral somehow? – Well, in certain cases you could perhaps do that. But in a lot of situations, in the process of cleaning that up, other factors of the data that you may need to rely on also get influenced, so that data is no longer clean either. And so this becomes a bit of a catch-22: fixing one problem creates another problem. – Yeah, okay, so where do you fall on this line? Let's say we know that humans are very imperfect, and we know that most of the bias in the data is there because humans are biased and will discriminate by race or gender or nationality and for a whole host of other reasons. So, on the one hand, we clearly do not have a perfect track record. But on the flipside, we are now potentially entering this area of complete amorality that's going to be built on the back of existing historical data and could introduce a whole new set of biases or, even worse, entrench existing bias into these decision-making processes. So what would you trust more? Do you trust the kind of human bias that's inherent in the existing systems, or do you trust the potential bias in these data sets and the AI that's coming, you know, in the next few decades?
Additional Readings
4.4.4 Mortgage Application – Accountability
Well, I don't know, Dave, that's a difficult question. Maybe we can go back to the basic framework we talked about earlier for models that are potentially dangerous. We asked: is the model opaque and not transparent? Does it have the possibility to scale and potentially be used by large numbers of people? And is the harm or potential damage that the model causes substantial? In the example of a home loan, all three definitely apply. Banks definitely will not open up their AI model, or decision-making model, to tell you how– – Not willingly. – Not willingly, how they made a decision. That would be quite rare. Secondly, the scalability of this is quite large: you can imagine some of the largest banks in the world, with thousands or hundreds of thousands of clients – the impact, or the scalability, would be quite large. And lastly, for each of those individual customers, the impact on their life could be huge. – Huge, yeah. – The difference between having a home and not having a home – what could be more fundamental to a person's well-being, or sense of psychological stability, than the opportunity, when they're ready, to purchase a home? So these are really fundamental things that I think we have to think about. Now, going back to the idea of recourse and the balance of: do we trust the human, even though we have bias? Or do we trust the AI, even though that also has some level of bias? I think it's a bit of a mix, and I think we need both. I don't think we can completely do away with the human element. – Right. – And rely completely on the AI, but we can't go overboard the other way either. And in our seeking of efficiency in how we use AI, one of the big draws is that it will hopefully make us more efficient, right? Some of the repetitive tasks and the things that take up a lot of our time, maybe we won't have to do those anymore. 
But in our pursuit of this greater efficiency, I do think we still need to sacrifice a little efficiency to keep a human element there. So, going back to the example – do you have any recourse? – what would be great is if banks continued to have somebody there that a rejected applicant can go to and say, hey, I got rejected, I just want to understand why. And then you could have somebody there to explain the process and potentially follow up to see if things were interpreted incorrectly. I think that would be the best of both worlds. Now, of course, not a lot of organisations may be willing to do that, but I think there will be organisations that are – particularly as we have these debates about how to balance this responsibility and this trust between the different stakeholders involved. – Yeah, and you say that banks or financial institutions wouldn't willingly allow people to see their algorithm or other kinds of inside data. I completely agree that they wouldn't willingly do that. But I wonder if transparency really is the key to ensuring these things – they do say sunlight is the best cure, from an ethics perspective. I wonder if that is the eventual future of this. If we're going to rely so heavily on these products, and so heavily on them across an entire financial industry, I wonder if the eventual step would be something like a patent, where you're granted a patent but, in return, you have to provide very public data on the creation and the various components of that particular device. I wonder if that means that eventually, if you hit these three things – if it's not transparent, if it's really scalable, and if the potential for harm is very significant – there would be either a public or even a kind of private governmental disclosure required, to show that there isn't bias within the system? – Yeah, and I think that's a really interesting point. 
And at a broader level, some people are talking about this in the context of large technology companies that have grown so much and become such a part of our lives. Maybe those companies shouldn't be considered just normal companies. – Right. – Because they're so influential, maybe we should regulate them like a financial company or even a public utility. – Even a utility. – That's right. – And so I think that's part of the broader debate we're having as a society, to understand how we want to manage the increasing influence of these companies in our lives. – Okay, so that's something to think about. From the standpoint of AI, as these systems become more and more ubiquitous and utilised around us all the time: are they making life more transparent, more efficient and less biased? Or are they actually entrenching existing biases and thereby further distancing certain segments of society from the financial markets and from financial inclusion?
Additional Readings
4.5.1 Social Credit
Let's look at another example and talk a little about credit, a pillar of the modern financial system. For many people in the world, credit is part of everyday life, ranging from credit cards to borrowing money from a bank to buy a home. For many, the ability to access and use credit is largely defined by a credit score, which ultimately gauges how likely a particular person is to repay the money they have borrowed. Conceptually, the better the score, the lower the risk. In the United States, we have something called a FICO score, named after its creators, Bill Fair and Earl Isaac, who founded the Fair Isaac Corporation, which initially produced these scores. To calculate a FICO score, different kinds of financial data – such as bank account information, existing debt levels, payment history, and other related information – are combined to produce a credit score. Many other countries now have their own versions of these scores. In theory, the use of these scores is important because individuals can more freely access capital and other financial products: banks and financial institutions are more willing to lend money, and likely at lower interest rates, because they have this credit information. So a mature credit system makes accessing capital easier. And for many in Hong Kong, or the UK, or other countries with developed financial systems, the notion and use of credit is quite mature – a given, really, almost an afterthought. But what if there were no credit score for a financial institution or bank to assess your risk when you needed to borrow money? How might that impact you? Well, that bank might require you to pay a really high interest rate or pledge a lot of collateral, even for a small loan, or they might require both. It was due to such challenges that microfinance lending organizations, like the Grameen Bank founded by Muhammad Yunus, were formed. 
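The idea of combining weighted financial factors into a single score can be sketched in a few lines. The category weights below are FICO's published headline weights (payment history 35%, amounts owed 30%, length of history 15%, new credit 10%, credit mix 10%); everything else – the 0-to-1 factor scores, the linear mapping onto the familiar 300–850 range, and the sample applicant – is a simplification invented for illustration, not the proprietary formula.

```python
# FICO's published headline category weights.
FICO_WEIGHTS = {
    "payment_history": 0.35,
    "amounts_owed": 0.30,
    "length_of_history": 0.15,
    "new_credit": 0.10,
    "credit_mix": 0.10,
}

def toy_score(factors: dict) -> int:
    """Map factor scores in [0, 1] linearly onto the 300-850 range (toy model)."""
    weighted = sum(FICO_WEIGHTS[k] * factors[k] for k in FICO_WEIGHTS)
    return round(300 + weighted * 550)

# A hypothetical applicant: strong payment history, moderate utilization.
applicant = {
    "payment_history": 0.95,
    "amounts_owed": 0.70,
    "length_of_history": 0.50,
    "new_credit": 0.80,
    "credit_mix": 0.60,
}
print(toy_score(applicant))
```

The key point is the one in the text: a single number summarizes many financial behaviors, so a lender never needs to inspect the underlying records, and a person with no such records gets no number at all.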
Now the issue of credit really becomes apparent when you consider that there are approximately 2 billion people in the world who are unbanked. This basically means roughly 25% of the world's population doesn't have a bank account. Without access to the financial system – which, for most people in the world, is through a bank – it is of course extremely difficult to develop a credit history and a credit score. The lack of this information makes it difficult for the unbanked to access credit, which means to borrow money, leaving many mired in the same financial situation. The exciting thing is that FinTech, paired with mobile technology, can help solve this conundrum. With the rise of mobile phones, and particularly smartphones, and the shift to digital banking, there's a lot of opportunity. Many of today's unbanked may never, or at least rarely, visit a traditional brick-and-mortar bank, but increasingly many will patronize digital banks, even online-only banks, and other digital financial services via their mobile device. This is, and will be, incredibly empowering for many of the world's neediest populations, and one of the great potential democratizing aspects of FinTech – giving people more opportunities. Now, for people who may be using mobile devices but are still not yet fully integrated into the financial system, or who have only minimal financial data, there is still the problem of trying to determine their credit. So one alternative to traditional forms of credit analysis is the rise of social credit. In its simplest form, social credit basically means that any kind of data, not just financial, can possibly be used to determine some level of credit. 
For example, your Facebook network and your relationships there, the type of people you most frequently message on your phone, or the amount of time you spend watching Taylor Swift videos on your phone – a whole host of behavioral and relationship data that is not necessarily financial – can be utilized by AI-backed algorithms to compile a profile on you, a social credit profile, that may have an impact on your financial and social life. Sounds fascinating, but is this okay? What are the benefits? What are the risks? Aspects of social credit are being rolled out in various ways already. At a national level, China is implementing its own indigenous social credit system, a reputational scoring system that applies to individuals and companies, with the intention of eventually scoring all of its citizens once the system is fully developed. The early stages of this social credit system have already garnered attention, as almost 10 million people have been banned from domestic air travel in China alone based on their social credit scores. Other potential impacts include limiting access to certain educational or employment opportunities, and social credit scores could even affect one's Internet speed. It's not just nation-states; private sector actors are also leading the charge. Ant Financial, one of the world's largest FinTech companies and related to the Chinese technology giant Alibaba, has started developing its own form of alternative credit, dubbed "Sesame Credit." In addition to the traditional financial information that something like a FICO score might include, Sesame Credit also incorporates other information, like the online behavior of a person, especially their activity within the Alibaba ecosystem. A high Sesame Credit score improves the user's "trust" level within the system and facilitates access to Ant Financial products. 
But China is not the only place where social credit analysis is growing. Even in Silicon Valley, you can observe aspects of social credit. Dealing with myriad issues related to claims of fake news, Facebook has developed its own rating system to gauge the reliability and trustworthiness of its users. One criticism of this, however, is that even if such a tool might be necessary, it's not transparent – and this relates to the idea of transparency we've discussed before. The use of social credit will continue to expand, either as a direct proxy for, or at the very least a supplement to, traditional financial credit. Maybe nowhere is this more apparent than in the peer-to-peer (P2P) lending market, which is another important part of the FinTech landscape. Many P2P platforms incorporate some aspect of social credit in their models. For example, one of the larger P2P platforms, Lending Club, which is listed on the New York Stock Exchange, originally spun off from an application on Facebook. Prior to its IPO in 2014, Lending Club frequently mentioned that social relationships were an important part of its model and that social affinity and other non-financial factors helped lower the risk of non-payment. As P2P platforms grow, more data becomes available, and AI capabilities improve, it will be interesting and important to consider how social credit will be used in the future to influence our lives.
Additional Readings
4.5.2 Social Credit – Subjectivity of Morality
So, revisiting our earlier example about purchasing a home: imagine you go to a bank to get a home loan, and in addition to the financial information you give the bank to evaluate your creditworthiness, they ask you for social information about your behaviour on your phone and your computer – what websites you frequent, what kinds of games you play and for how long, what kinds of music videos you watch on YouTube. How would that make you feel? And how would that impact how you live your life on a daily basis? Dave, how would you feel if this were the situation you were in? – To be perfectly blunt, I think it's kind of scary. I don't really get caught up in the more dystopian visions of an AI future; I feel like we're probably a long way off from that. But this is one area – behavioural modification – where there are pretty concrete historical examples, not even that long ago, of broad-scale social change through behaviour modification, especially when looking at peer groups, family members, educational history, and religious or other beliefs, that led to fairly dire consequences. So, let's start with the good stuff; let's not be too negative. One of the most successful examples of this, in Africa and in developing parts of Asia, is a really simple aspect of social credit: whether or not you pay your mobile phone bill. If you're in Kenya and you're using one of the mobile banking payment platforms and you don't have a bank account, whether or not you pay your mobile phone bill each month is probably the best available indicator of your creditworthiness. – So how likely you are to repay. – Exactly. – Because you've paid your phone bill for the last year. – When that came out, I thought it was brilliant, and I think millions and millions of people have benefited from that aspect of social credit scoring. 
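The Kenya example above is about as simple as a credit signal gets, and it can be sketched in a few lines. The thresholds and risk labels here are entirely hypothetical, chosen only to illustrate the shape of such a rule; real lenders using mobile-money data would calibrate this against repayment outcomes.

```python
def phone_bill_signal(paid_on_time: int, billed: int) -> str:
    """Classify credit risk from the on-time bill payment ratio (toy thresholds)."""
    if billed == 0:
        return "unknown"  # no payment history at all - the unbanked starting point
    ratio = paid_on_time / billed
    if ratio >= 0.9:
        return "low risk"
    if ratio >= 0.6:
        return "medium risk"
    return "high risk"

# 11 of the last 12 monthly bills paid on time.
print(phone_bill_signal(11, 12))
```

The appeal is clear: a single behavioural ratio substitutes for a missing bank record. The concern discussed next is what happens when the inputs move from bill payments to browser history and "moral" behaviour.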
On the flipside, if you look at some of the other examples of this, they'll look at browser history, or how many hours you spend each week playing video games. They'll look at what I would consider more moral decision-making – that's the type of thing that concerns me. – Okay, so is the concern that, when we think about morality, frequently people ask who gets to decide what's moral or not? Is that the concern you're referring to? – That's exactly right. I mean, think about it: who is to say whether your behaviour is specifically good or specifically bad, especially when we're talking about accessing credit? One of the examples given is that playing video games is bad, and so people who play a lot of video games should be less worthy of credit. Even if I agreed with that on a personal level – which I don't necessarily – it's very dangerous to think that a small group of people, probably men, whom we don't really know and whose discussions we can't see, are going to be the ones to determine what is moral and therefore what is acceptable in society. And as we've already discussed, this can have extremely broad implications in terms of whether or not you can buy a house, whether you can get a visa to travel outside the country, or in some cases even what majors you can study and what careers you can enter. – And returning to something you mentioned about the potential risk of the impact: we'll start modifying how people act and behave, and I think that's really important. Philosophers, from long ago to more modern times, have talked about this idea of what observation does to people's behaviour. 
Even though nobody is physically compelling you to do something, the fact that you feel like you're being watched – even if you may not actually be being watched – starts shaping your behaviour. And that is a very interesting, as well as scary, proposition. – Potentially. And again, not to be too negative here, because the reality is that you and I generally conform to the best aspects of human behaviour. That is why, as a species, generally speaking, we get better and better: there is less violent crime now, and we tend to mirror the best elements of our humanity. But I'll give you a quick example. My father – not just once, he used to say this a lot when I was young – would take me to perform service in the community, and I, like many teenagers, would go quite begrudgingly, complaining the whole way. And he would say to me, "If you don't want to do this, then this is not going to be something that counts as a benefit to you." Meaning that I had to actually want to do it in order for it to be service that would benefit me spiritually or psychologically. And so this raises the question: when you are trying to modify behaviour in an ethics context, can you compel people into a certain type of behaviour and thereby make them good? Can you compel people into goodness, or do you have to educate them and inspire them into goodness? – I see, that's interesting. So the idea goes to the internal, inherent motivation a person has for an action: even though two people may both be doing good things, we may think the motivation for doing the good thing sets them apart. 
– Yeah, and if history is any example, when societies have tried to compel a particular type of moral behaviour, that has often led to some of the most dire social consequences, because people do not feel an inherent sense of shame or morality in their decision-making; instead, they often look to avoid those requirements and become very disassociated from society. And it can create some very significant perverse incentives. – Okay, is that similar to this idea of a checkbox morality, in the sense that if I'm doing these things that are supposed to be good in society, I'm a good person – when in fact, just because you're checking the boxes doesn't mean you're actually a good person? – Yeah, well, there are two aspects to it. One is checking the box and therefore feeling that as long as I tick the box, I'm a good person – and anything outside those boxes is justified because I've ticked them. But the second, which is slightly more pernicious, is the idea that we're ticking the box just to tick the box, while knowing that it's not our true intent or true desire – and that's when I think some of the more malicious stuff can come through. Again, there are historical examples of this, where genocide or significant inequality was perpetuated simply on the basis of false definitions of morality. Just to give an example, for those who are confused at home: what if, based on my sense of morality, I believed that a particular minority race was not worthy of voting, not worthy of financial credit, and not allowed to own property? I could say that God has told me this is the right thing to do and that this is my definition of morality, when in actual fact we as a society would hopefully say that's actually a pretty terrible thing. – Yeah, and to your point, that's happened myriad times – Many, many times, very recently. – across the history of humanity, right? 
A lot of that discrimination was potentially based on religion. Some of it was based on how we look, or where we were born. – Political perspective. – Exactly.
Additional Readings
4.5.3 Social Credit – Accountability
The other concept that is interesting to me relating to social credit is this: on one hand, you're right – social credit has been incredibly empowering for populations that can't access traditional forms of financial credit, which otherwise limits their access to money, like you talked about. – Yeah, yeah. – The thing that gives me pause about social credit is that we are using it as a proxy for financial credit, or financial data. – Yeah, yeah. – And whenever we use a proxy, it's rare that it maps one for one. If we see a one here, it's unlikely the proxy will match up exactly; there's usually some slippage, some parts that don't overlap. – Yeah. – And what I'm afraid of is, if we inculcate the idea that using a proxy is somehow the same thing as using the real thing, and that kind of thinking leaks into the ethos of how we approach AI and FinTech and these technologies, then we can really find ourselves in a situation where we assume that's okay, when in reality the proxy and the real thing don't overlap that much. – Yeah. And then we move farther and farther away from what the real objective actually was. – Yeah, and I think that can be right. On the flipside, there are a lot of examples where traditional credit scores have been shown to be problematic: false information, identity theft – and anyone who's had their identity stolen knows how incredibly difficult it can be to clean up their credit. 
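The "slippage" between a proxy and the real thing can be illustrated with a small synthetic sketch. All numbers here are invented for illustration: a true creditworthiness signal, a social-data proxy that is the same signal plus noise, and a simple approve-above-0.5 rule. The disagreement between the two verdicts is the slippage, and it grows as the proxy gets noisier.

```python
import random

random.seed(1)

def misclassification_rate(noise: float, n: int = 20_000) -> float:
    """Approve when the proxy clears 0.5; count disagreements with the true signal."""
    errors = 0
    for _ in range(n):
        true_credit = random.random()                        # true repayment ability
        proxy = true_credit + random.uniform(-noise, noise)  # e.g. social-data score
        if (true_credit > 0.5) != (proxy > 0.5):
            errors += 1
    return errors / n

for noise in (0.05, 0.2, 0.5):
    print(f"proxy noise {noise}: disagreement rate {misclassification_rate(noise):.3f}")
```

Even a modest amount of noise misclassifies borrowers in both directions – creditworthy people rejected, uncreditworthy people approved – which is exactly the worry about treating the proxy as if it were the real thing.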
And so I think, at the end of the day, what we're saying is: social credit and other AI and machine learning-based credit rating systems can be incredibly powerful, and can bring people into financial markets they have never had access to. But, like everything else we're talking about in this course, this requires those aspects of transparency, trust, and proximity, to ensure we have the rules right up front – that we're thinking about those things up front – so that we're building a better system, not entrenching biases into existing systems. – And I think that is critical, what you just mentioned. Because what will happen, and what has already happened even before the advent of AI and these other technologies, is that if some interesting process or non-AI technology came into existence, it tended to get rolled out to other parts of the world. – Yeah. – And if a particular form of AI, or some technology, seems to be effective, then it's very easy for that model, for that algorithm, to start propagating into sectors and industries and geographies it was never intended for. – Absolutely. – But we just assume that it's okay, because it worked well in California, or England, or Australia, or somewhere like that. – Yeah, or in a particular industry: it worked for mortgages, so use it for car loans, and so on. – That's right, and we assume that it will be a very easy transition across industries or sectors, when actually that's not necessarily the case, and in fact it could cause more danger. So it goes back to this idea of scalability. Now we're really scaling across the world – across geographies, across industries, and ultimately across people. – Yeah.
Additional Readings
4.5.4 Social Credit – Privacy
We have another question for you: what are the implications of such systems? So Dave, what do you think? What are the implications of these social credit systems we're talking about, backed and powered by AI? – Well, okay, let's go back to what we were talking about with, say, the Taylor Swift and Jacky Cheung concerts, right? – Oh, always – that's always an interesting topic. – What we talked about back then was, from a security standpoint, the bad guy gets caught, right? So that's a good thing, and one of the questions you asked me was: why should it be a big deal if the bad guy gets caught? We shouldn't want him there in the first place. My response then, and really how I relate to this now, is: yes, absolutely, we want to have as secure an environment as possible, but I think we at least have to ask, at what cost? The idea being that if we are subject to this incredibly broad level of surveillance – not "granting" it, since we don't really have any control over it – if we have this mobile technology on us all the time, and if we are now going to be introducing social aspects of behaviour into, really, literally rating us, then these are some things we at least need to think about collectively as a society, to understand what we are giving up for this, right? – Okay, that's an important question that we all need to consider: what is the sacrifice we're willing to make to have this increased security? In the context of the concert we talked about, there was massive video surveillance, and one compelling aspect is that AI is powering the video surveillance, making it much more effective – and now we have aspects of social credit about our behaviours. So I think in the future you could easily see a situation where those are paired. – Yeah. – As a broader way to surveil or control populations. 
– We might have higher or lower credit because we like Taylor Swift. – That's right. – Right. – And that composite is created through many more data points now, through the surveillance that's happening: how frequently you go to 7-Eleven – Yeah. – on a particular day and at a particular time, or how frequently you stay out late at night. All these things are now going to be captured more through observation, as well as through the actual data you create through your own usage of various devices. – Yeah.
Module 4 Conclusion
So, what does the future hold? Of course, no one really knows exactly. But what is certain is that AI is going to be a big part of it. Famed inventor and noted futurist Ray Kurzweil predicted that a technological singularity will be reached in 2045. Such a singularity basically represents a future where AI-powered superhuman intelligence is so capable that it creates even more innovative technologies, possibly leading to very new realities that change our assumptions about intelligence and perhaps the nature of our very existence. This could be the path to the utopian existence so far portrayed only in science fiction movies. That said, there are those – like Nick Bostrom, a philosopher at Oxford, Director of the Future of Humanity Institute and author of Superintelligence, or Elon Musk, whom most people know – who have expressed serious reservations about a post-singularity future. Even Stephen Hawking once remarked, "The development of full artificial intelligence could spell the end of the human race." Wow, that sounds scary. Well, this doomsday scenario is largely motivated by the possibility, and the fear, that AI may become so advanced that we may not be able to control it, and that such AI may eventually want to manage and control us. Additionally, there are other AI-related concerns, many of which have already been touched on in this module, ranging from fairness to privacy to displaced labor. As an AI-dominated future becomes more imminent, communities of concerned technologists, lawmakers, and other interested parties are coming together to grapple with and define the ethical issues surrounding AI. 
This is happening in China, with the formation of a national-level AI ethics committee. Other examples include the European Union's High-Level Expert Group on AI, which has released its own guidelines on the ethics of AI, and, in the US, an organization called The Partnership on AI – a collection of leading global companies and institutions working together for the stated purpose of "shaping best practices, research, and public dialogue about AI's benefits for people and society." So what's next? In his best-selling book Zero to One, the well-known Silicon Valley entrepreneur and investor Peter Thiel wrote: "…humans are distinguished from other species by our ability to work miracles. We call these miracles technology." AI makes the possibility of such miracles much more real. Ultimately, the future is not fixed, nor its outcome certain, and because of that, each of us – you and us – has the opportunity to shape the future. Hopefully this module has compelled you to further consider the parameters, and maybe even the limitations, that we need to place on these technologies, which have the potential to do so much, but potentially at great cost as well.
Additional Readings
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Hauer, J. (2016). The Funny Things Happening On the Way to Singularity. TechCrunch. Retrieved from https://techcrunch.com/2016/04/09/the-funny-things-happening-on-the-way-to-singularity/
Metz, C. (2018). Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots. The New York Times. Retrieved from https://www.nytimes.com/2018/06/09/technology/elon-musk-mark-zuckerberg-artificial-intelligence.html
Metz, C., & Isaac, M. (2019). Facebook's A.I. Whiz Now Faces the Task of Cleaning It Up. Sometimes That Brings Him to Tears. The New York Times. Retrieved from https://www.nytimes.com/2019/05/17/technology/facebook-ai-schroepfer.html
Hawking, S., Russell, S., Tegmark, M., & Wilczek, F.
(2014). Transcendence Looks at the Implications of Artificial Intelligence, But Are We Taking AI Seriously Enough? The Independent. Retrieved from https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html Ethics Guidelines for Trustworthy AI: High-Level Expert Group on Artificial Intelligence (2019). European Commission. Retrieved from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai Knight, W. (2019). Why Does Beijing Suddenly Care About AI Ethics? MIT Technology Review. Retrieved from https://www.technologyreview.com/s/613610/why-does-china-suddenly-care-about-ai-ethics-and-privacy/ Stone, P., et al. (2016). Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel. Stanford University. Retrieved from http://ai100.stanford.edu/2016-report Araya, D. (2019). Who Will Lead in the Age of Artificial Intelligence? Brookings Institution. Retrieved from https://www.brookings.edu/blog/techtank/2019/02/26/who-will-lead-in-the-age-of-artificial-intelligence/ Module 4 Roundup
– Hi, welcome back to module four's roundup. We're excited this week for two reasons. One, as we mentioned last week, artificial intelligence is one of our favourite topics, and there's a lot of implications there about the future, about society, a lot of things that are really important to us. So hopefully it's been meaningful for you as you've gone through the module. But perhaps most importantly, we're here in Hong Kong. David Bishop and I are teaching a class this week related to FinTech, and some of our great students have kindly joined us in this week's roundup to discuss some of the questions that you've shared with us. So maybe, David, you can kick us off with our first question. – So as always, we've loved the comments you guys have had, really appreciate you sending them in, and some of the comments we kinda wanna throw out to our class. They're from all over the world, a really diverse group. And so the first comment that we had was about surveillance, and it really relates to some of the AI things we've been talking about this week. We're constantly surveilled: there's cameras everywhere, there's ATMs on the street. So what do you think about that? What are some of your thoughts? Is it a little bit scary, or is it better because it makes the world safer? What are your perspectives on this in terms of the utilisation of facial recognition and AI in our everyday lives? – Yeah, so I guess it depends where you're coming from in the world. If you're in the States, I think people would be really scared and kind of be against this. If you're from China, some views are that if you have nothing to hide, you have nothing to be scared of. But I think there's two main things. The first thing, if you're from the States in particular, is your rights, your freedom.
Some people have been seen in the news trying to cover their faces when they see a camera, and police officers actually force them to show their faces, – Right – which I think is not right, so to speak. – Yeah – Another thing is that these cameras in public can be tampered with. I think they could be used for blackmail by police officers and law enforcement, against high-level figures in the world. But there are also pros to having them as well. – Yeah – So you know, uh, public safety… – So you personally, how do you feel? – Personally, I think it's okay, I have nothing to hide. But I do feel, 'cause I am from both Hong Kong and the United States, that if I don't wanna show my face on the camera, I shouldn't be forced to. – Yeah – But I don't have an issue with it. – Yeah, so for the class out there, what he's referencing is that during our class here we actually showed a video, from London of all places, which kinda surprised us, just from three weeks ago, where they had set up a police area outside on the street, and they were requiring everyone walking by to go through facial-recognition software. A gentleman didn't want that, he thought it was an invasion of his rights, so he pulled his jacket up over his face, he pulled his hat down, and the police actually used that as probable cause to detain and question him. And so it's kind of a catch-22, a lose-lose scenario. Either you let them scan your face against your will, or you run the risk of them using that as probable cause to question you. So Cameron's saying that, generally speaking, it's not a big deal if they scan us and stuff because it keeps us safer, nothing to hide. But if somebody wants to hide their face, you think that's perhaps what they should be able to do. Is that fair? – [Cameron] Yeah – Okay, so Kate, what do you think? 'Cause you're from Shanghai, from China, cameras everywhere. What is your take on this?
– I still think that, uh, how to say, intuitively it's very scary, right? Everything you do shows up on surveillance and everything. But actually, surveillance has happened for hundreds of years in different ways. – Yeah – So I still think the technology, or say the tool, is not the centrepiece. The centrepiece is that we need to be mindful of these kinds of risks, like Cameron just said, and to do the right thing. – Okay, so one twist on this question, and I'm curious if anybody has any thoughts. What we haven't talked about in this course yet is the combination of these types of surveillance with deep fakes. So, some of the newer technologies are able to take anyone's face and then put another person's face on that video, and it looks extremely realistic. They're using this in Hollywood really extensively now, and the technology's getting cheaper and cheaper. So it's likely that certain people could potentially be framed for a crime, or someone could use these types of external surveillance against a person they wanted to harm in some way. Does that give you any additional pause? Or is this just something that's, you know, maybe something we can't do anything about? – I think it's inevitable that it will happen in some form or another. It's really up to governments and regulators to have tight controls on this sort of misuse of AI and so forth. But I have faith in governments around the world that they'll be able to control it and ensure that the usage is for the appropriate reasons. – Yeah, so our next question is about how we regulate some of these technologies. So we're talking about artificial intelligence, and David brought up this twist on that previous question about deep fakes. How do we regulate these?
There's different countries in the world, different jurisdictions – A diverse group here – A diverse group here, and people have different opinions on this. So, you know, will there ever come a time when perhaps we can have a uniform rule or regulation that will cover this globally? Practically, that could be difficult, but is that something we aspire to do? So maybe we start with that question, because it's quite important as a fundamental starting point in how we think about perhaps regulating some of these technologies. Do any of you have any thoughts about that? – Yeah, I guess a lot of companies nowadays, how they operate is not really restricted to a physical space. Back in the day, you know, you think about a retail shop, it's confined to a physical space. Let's say, if a shop is operating in New York, then it follows New York law. And if it's operating in California, it follows California law. But nowadays everyone's shopping on Amazon, everyone's shopping on different online websites, and at least in the States, the bar exam and the law are kinda specific to each state. So if some sort of crime, some sort of incident happens, which jurisdiction's law do we go by? And then if we go beyond that, beyond one country, we buy things from everywhere, I buy things from the UK, I buy things from Hong Kong, then who are the lawmakers or the regulators to really regulate, or what law to follow, or what guideline to follow? And when there's some inter-country incident happening, which guideline is, I don't know, like the golden rule? – Mm, yeah – To dictate, so I guess– – It's really complicated, yeah. – So do you think there should be a universal law at some point? Would that be the best way to deal with this? Or can we rely on law as kind of the way we deal with this, or should we— are there other mechanisms, perhaps, that we should think about?
– Maybe we need a supreme AI overlord who can just determine all those things for us, maybe. – Yeah, I mean, I don't know, that's a good question. I don't have an answer to this, I guess. – Do you think that it's even feasible that there will be, like, global standards? – [Carl] So, yeah, for instance, it could be very complicated. Like, even on nuclear weapons, we don't all agree. So how it's gonna work for FinTech or AI, it's gonna be very complicated, I guess. – Yeah. Now, there's a lot of money involved, though, and so you will see that typically where money is involved and where cross-border commerce is involved, those are the rules that are typically the most uniform. So I think the best example would be intellectual property, where you have really large kinda multinational organisations like the WTO, or other global bodies, that kind of force companies into obeying certain rules. Do you think that the EU, and the US, and China perhaps would be powerful enough at some point to kind of force everybody into adopting, like… 'Cause you don't have to convince everybody, you just need to convince a couple of core powers. Maybe that could be plausible? – It's complicated nowadays. Like, when you see already the trade war, China versus US, who's gonna take care? Who's gonna take over? Who's gonna decide? This is a good question. – Yeah – So… – Okay – Yeah – So I think the MiFID II example is quite interesting, actually, because, as Shannon was talking about, the PII, so personally identifiable information – because that actually ties directly into this idea of artificial intelligence and the data that's there. – Yeah – Because does that qualify as PII? – Yeah – So that actually becomes quite an interesting question to think about. Because if it does, then it will fall automatically under an existing regulation – Yeah – And if it doesn't, then why not?
Because we could say our birth date, address, national identity number, you know, is personally identifiable information, but clearly our face should be a form of PII too, right? So it does raise some interesting questions. – Yeah, and I do think it also gives me some hope that there is the potential for more uniform guidelines going forward. Because if you think of capital markets, right? There's a lot of uniformity, because if you have a foreign or overseas company that's listed, say, on a US stock exchange, then they have to adopt some of those rules. Contract rules are fairly uniform. Again, intellectual property. Product liability rules, even like the development and production of products. So basically, these countries do wanna do business back and forth, and AI and information technology and stuff is gonna be more cross-border in nature, so I think it is actually very plausible that there will be some type of standards going forward. I guess the question is who's gonna be able to push those things and enforce them, right? Enforceability is always, I think, the biggest challenge when you're dealing with cross-border things. Yeah – There's different definitions of PII; there's no single definition of PII. – Right, currently, yeah. – So, yeah, I mean, one country can impose, like, thirty different fields that are PII. – Yeah – But another country may think that that's not invasive. So who's the authority to say what's the right level of control, what are true PII, if there is such a thing – Yeah – and what are some bogus PII? – Yeah, and this is actually a good tangent to our third kind of topic. Because one of the topics that students were commenting on was really about authority and power. And specifically, in terms of introducing AI in order to reduce human bias, unfortunately the data that creates a lot of AI data sets is already biased, and so it could potentially re-entrench existing bias.
And so I'm just curious, from your perspective. So, some of the things we've talked about include mortgages that are built off of perhaps biased, racist data, which then create AI systems where the computer decides to give worse loan terms to a minority, let's say. Kenny has a technology background. Do you think that introducing artificial intelligence, machine learning, and these kinds of amoral, non-human actors is going to create a more fair and transparent system, or is the data so tainted that it's only gonna entrench these human biases and make even worse outcomes for some people? – From a technology point of view, as we know, to build an AI model you need to feed in a lot of data, okay, and that data is based on past history, for example, from a bank, the mortgage-approval history, the transactions. So if there is some bias in the very first place, for example the approval manager keeps rejecting mortgage applications because of race, because of background, then that data will actually feed into the model, and then the model will have the bias. – [David And Teacher] Yeah – But then the thing is, from the ethical point of view, what about the bank, or the government? Would they like to change this kind of bias? It's quite hard to say. Is it, "okay, this is fine, this is what we're used to doing, this is the result we expected," or, "hmm, this is not good, we need to change it"? I guess it depends on the bank or the government, how they treat this, to make it a fair judgement. – Yeah, okay, great. And as someone who's, again, in technology, do you think that there are things that we can do now to hopefully ensure a more transparent and ethical system? – Of course. I believe the government has to take the lead, to educate the banks, the organisations who use AI. They have to promote the fair use, the ethical side, of AI. And then regulation also, from different points of view.
Discrimination, racism, different kinds of aspects: they have to set very clear directions and regulations. But it takes time to do that. – What can we do? Let's assume– I think everybody that has been thinking about these AI types of issues, about data bias and things like that, understands the basic problem: if you get bad data in, you get bad outcomes out. But let's say that happens unintentionally. What is the process to address that? Okay, Carl, yeah. – So, in fact, the good first thing is to be able to realise, thanks to AI, that we are doing things in the wrong way. So we are making mistakes, and we need to recognise those mistakes. And then we're gonna be able to build on that. And then to recognise that with ethics, we're gonna be able to build the right model, one without discrimination about gender, about race, or whatever. I think it's a good point. And to bring new data to build a model which is gonna make the world, let's say, uh… the world a better place. (laughter) – There we go, a typical Silicon Valley mantra. Okay, so kinda concluding thoughts: give us, like, one or two lines of why maybe you're excited about AI and its potential for the future. Do you wanna give us an idea? What is it about AI that kind of excites you from a FinTech standpoint? – From a FinTech standpoint, we think it's actually making our lives better in many ways. Like, the payment system has been changing a lot over the past few years. But to a certain extent, I've found that with AI there are a lot of data-sharing issues, privacy issues, that also arise at the same time. So there always has to be a balance between AI usage and how it should be used in an ethical way. – So, based on what we've discussed already, I think that one of the most powerful aspects of FinTech is the inclusivity it brings to people, and especially to people who don't have access to traditional financial services, and that's one of the biggest benefits.
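Kenny's point about biased approval history feeding into a model can be made concrete with a toy sketch. Everything below is a hypothetical illustration, not part of the course: the numbers are invented, and the "model" is deliberately the simplest possible one (a per-group approval rate learned from past decisions), just to show that a system trained only on historical outcomes faithfully reproduces whatever bias those outcomes contain.

```python
# Hypothetical historical mortgage decisions: (group, approved).
# Human reviewers systematically rejected group "B" more often.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

def fit_approval_rates(records):
    """'Train' the simplest possible model: per-group approval rate."""
    counts, approvals = {}, {}
    for group, approved in records:
        counts[group] = counts.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / counts[g] for g in counts}

model = fit_approval_rates(history)
# The model reproduces the historical disparity: group B scores half
# as well as group A, purely because past reviewers were biased.
print(model)
```

Nothing in the training step "knows" that the disparity came from prejudice rather than creditworthiness, which is exactly the problem the roundtable raises: the fix has to come from outside the data, whether by auditing outcomes per group or by deliberately collecting less tainted data.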
I come from an emerging market, Bangladesh, and we talked about that at length in this course: how FinTech has brought so much access to financial services to the unbanked. And I think that's one of the greatest potentials. For all the risks, such as bias and other things that marginalise people, there's a huge positive to FinTech that actually brings people access to financial services to improve their quality of life. – Yeah, it's interesting. So I've always thought of Bangladesh as a good example because, at the University of Hong Kong, we've got a great university, and yet no Nobel Prize laureates, and yet the University of Dhaka, also a great university, has, I think, two. And a lot of times the most beneficial emergent technologies happen in emerging markets, because it's really just out of necessity, right? So if you think of Muhammad Yunus, he's very explicit, he's like, "I wasn't trying to invent microfinance, I was just trying to serve the needs of a lot of people." And so I think, some of it for me, again, I love the developing space as well, and I think some of the most interesting utilisations of these technologies are really in that space, because the potential impact is just astronomical. It's really cool, yeah. – And that's also brilliant in the sense of, going back to the topic of eliminating bias in AI, because, what, David, you were talking about the bottom of the pyramid. They're collecting data from the bottom of the pyramid. – Yeah – They're not collecting only biased data from rich people or privileged people, so that data is going to be incredibly powerful towards contributing to less biased AI. – And, just to clarify, I'm not sure if this is what you meant, but there has not been data collected on them in this way before, which means it's completely untainted. Hopefully. If it's done properly. So you've just got a clean slate moving forward. Yeah, great. So any other kind of final comments that anybody has? Or excitement about AI?
– I think not only FinTech, but technology in general now draws a lot of attention, so there is a lot of discussion, just like what we're doing today. Actually, it's a positive element to revisit the ethics and the risks, and also the current system. What is the issue with the current system? Why does the current system, for example the financial system, not serve some… – Yeah – People – Yeah – Where's the gap? And then challenging business as usual actually has a lot of potential to add a lot of value to the economy, though also at some expense. – Yeah – Great, well, we really appreciate our students for joining us in this roundup. We really wish we could actually meet with all of you in our online course in this kind of setting and capacity, and kind of engage with these ideas. Hopefully we'll be able to continue to do this in some way moving forward, and we really look forward to connecting again after module five, which is really important as well, as we revisit some of these questions in a broader structural sense. – Talk to you next week – Thank you Module 5 A Decentralized Future 5.0 Module 5 Introduction
– Welcome to module five. In this module, we're gonna talk about some of the key reasons that people are calling for FinTech innovation, and one of the main ones is its decentralised nature, thus democratising finance and allowing regular people to participate more fully and affordably in financial transactions through technologies like cryptocurrency, non-government-issued IDs, peer-to-peer lending, things of that nature. And so in this module we're gonna address some really big questions, considering whether FinTech should lead to a decentralised, democratised system of finance, or whether existing institutions will adopt FinTech strategies to cement their existing hold on financial markets. During this module we're gonna discuss these major themes, including the perceived desire for democratisation of goods and services. Is that good or bad? Will there be unintended consequences? So a lot of this is gonna be a continuation of things that we've talked about in other modules, including module two, where we talked about blockchain. So we'll be referencing back to those from time to time. And one of the key things that we're gonna be emphasising is the sources of power. So, for example, will FinTech innovation lead to a decentralisation of power, or maybe a concentration of existing power sources like governments and banks? Or will a new concentration of power be created in TechFins like Amazon or Tencent, which owns an app like WeChat, for example? Okay, we're gonna start module five with a quick story. Now, I've been living and working within China for over a decade and have travelled there many times. Last year, I had the opportunity to go to a small part of Western China that I'd never been to before. So imagine kind of a rural desert landscape with kinda dust everywhere. In the morning I decided not to eat breakfast in the hotel and instead went out to get something on the street.
Now, I noticed that there was a particular street vendor, an old woman, who you could tell had been doing this for a very long time, and she was surrounded by people, so it was obvious that her food was quite popular. So I went over there quite excitedly, I kinda watched what everybody was doing, and then when it came to be my turn I ordered some food. Thankfully I can speak Chinese, and so that part of it was easy from a cultural standpoint. As I reached into my wallet and started to pull out some cash, I could immediately see the concern on her face when we both had the realisation that every other single person in that circle had paid using their phone, and she did not have the ability to give me change in cash. So here I was, probably the person who had the most access to the financial industry, and yet I was the one that was completely cut out of the transaction and out of this marketplace. Now, even though I couldn't get my breakfast and I was a little upset about that, I found this experience super cool, because this community in rural Western China had in a short amount of time moved almost completely away from cash. And indeed, anyone that's been to China recently knows that most communities are the same way, and the government, for those that have been there, is actually very supportive of this change. So, we find this story is gonna be a good summary not only of the things that we've discussed in modules one through four, but also a nice transition to help us start asking some of the bigger questions that we're gonna be analysing in modules five and six. So, before you move on to the next video, we just wanna ask you: think about the story, and from a FinTech standpoint, what are some of the observations that you have, especially about how FinTech is impacting local people, average people, in rural communities or advanced communities all over the world?
Additional Readings He, D., Leckow, R., Haksar, V., Mancini-Griffoli, T., Jenkinson, N., Kashima, M., Khiaonarong, T., Rochon, C., & Tourpe, H. (2017). Fintech and Financial Services: Initial Considerations. IMF Staff Discussion Note. Retrieved from https://www.imf.org/~/media/Files/Publications/SDN/2017/sdn1705.ashx Lee, D. K. C. (2017). Decentralization and Distributed Innovation: Fintech, Bitcoin and ICOs. Stanford Asia-Pacific Innovation Conference. Retrieved from http://dx.doi.org/10.2139/ssrn.3107659 Magnuson, W. J. (2017). Regulating Fintech. Vanderbilt Law Review, Forthcoming; Texas A&M University School of Law Legal Studies Research Paper No. 17-55. Retrieved from https://ssrn.com/abstract=3027525 5.1.1 Is FinTech Leading to Inclusion or Exclusion?
– Okay, welcome back. Now, although that was a quick story, we hope that you had a chance to think about it, because we believe a lot can be observed from it. Now let's consider a few of these things. – Probably the most immediate observation is something that we've already discussed: the scale and penetration of FinTech innovation is faster and broader than anything we've seen before. Everyone on the street had a modern smartphone, and they had all adopted the technology into their daily routine. One reason this is true is because FinTech innovation can lead to efficiencies, which in turn can help a lot of people. And as discussed in module two, FinTech innovation can help cut out the middleman, which saves costs. And for the street vendor, using an app payment system means not needing to handle cash, which likely reduces the risk of theft and, for a food worker, is more sanitary. And in many cases, using cashless payments is just faster and more efficient, and leads to better service. So basically, using a mobile payment system helped her business be more efficient and hopefully more profitable. – But FinTech innovations can also lead to exclusion. I probably had the greatest access to traditional finance, whether cash, credit or other loans, of anyone on that street, yet I was almost completely excluded from the marketplace, not even able to purchase breakfast. Like in this example, FinTech can lead to separation from financial markets, and therefore from basic necessities. Although buying breakfast is a simple transaction, there are many layers of filtering in the story. For example, if you are required to pay with a phone, then guess what, you have to have a phone. And then you have to have the app WeChat, and then an account on WeChat, and then money or credits in that account. So one of the interesting challenges the FinTech industry faces concerns access to these technologies.
While many are hopeful that FinTech innovations will lead to better access to finance for the masses, others are concerned that they could also lead to increased exclusion from basic services. And this can be particularly true if governments decide to intentionally exclude some people from these platforms. – Now, going back to the example of the street vendor in China, we ask the big question of the day. Will these innovations in FinTech bring the world closer together through multinational FinTech solutions, or will we become more and more isolated from each other? For example, credit cards have made it easy to make purchases around the world. No matter where I am, I feel confident I can make necessary payments using my credit card, though sometimes that means paying high fees. – But on the other hand, David and I have been travelling to and teaching in China for many years. And while we love travelling there, from a FinTech perspective, every year it seems more and more insular and disconnected from the rest of the world, because of this simple paradox: the better their app ecosystem gets and the more people in China become interconnected via these apps, the more they simultaneously distance themselves from the rest of the world. Now, this was made clear in the experience with the Chinese street vendor, and this is happening in other countries too. Okay, so let me ask you another question. Do you think that FinTech is bringing the world together or pushing us further apart? 5.1.2 Money and Currency – Trust and Power
– Another interesting lesson from the street vendor example was that value was being transferred, but was this money in a traditional sense? It wasn't physical currency, that's for sure. But there was currency backing that transaction, even if distantly, on some cloud somewhere. This is very different from the systems of payment that have existed for thousands of years, and it will likely lead to the next evolution of not only how we pay, but also our perception of money. And actually, on that note, an interesting vignette, or an example: even in North Korea, traditionally a country that we think of as cut off from the global financial system, increasingly many people now use cell phones. And frequently they have to top up, like in other countries in the world, to buy credits. But now those credits can be transferred from one cell phone user to another as a form of payment. And so this is also a different concept of money. – Now, this reminds me of one of my favourite things about living in Hong Kong: the Octopus Card. With this little card, I can pay for just about anything. Transportation, food, and even government services. And you can see it's pretty worn here; I've used this card practically every day since my family moved here in 2007. And my children almost exclusively use their cards to make purchases. – While some may say that the shift to making payments with phone apps or the Octopus Card is just the next natural iteration in making payments, much like the credit card and the handwritten cheque before it, and that people just get used to the changes, in the way that children in Hong Kong are already used to paying with an Octopus Card when they go to the store, we do need to understand that such changes can actually have very significant implications for society. – Now for example, there are personal implications.
As we know, these developments can make it easier to access and use money; having a credit card means you don't have to carry around thousands of dollars in cash to make a purchase, for example. But on the other hand, studies seem to indicate that people spend more money when using a credit card than when using cash. And there's reason to believe that people spend even more money when using an app or a web-based service than when using a credit card. It's believed that the proximity issue that we discussed previously has a lot to do with this. This is pretty common sense, right? Holding cash is proximate, and therefore forces us to think about the work that went into earning the money. Thus we naturally spend less when we're holding cash. – But beyond these personal implications, as mentioned before, there are broader societal implications to consider. Now, going back to David Bishop's experience with the street vendor in China: this was a great example of the disruption of the many formidable institutions that for millennia have controlled not only finance, but most other aspects of institutional power. Remember, there were no banks, whether physical or virtual, in this scenario. WeChat, called Weixin in China, isn't a bank or even a financial institution in the traditional sense. That's why we refer to them as TechFins: large technology companies that, because of their size, user base and overall scale, are starting to move into areas of commerce and services traditionally controlled by banks. – But not only were there no banks, there was also no physical currency. As I'm sure you're aware, banknotes and coins can only be produced by government-approved organisations. For example, US dollars are printed by the Bureau of Engraving and Printing, and are issued by the Federal Reserve. And here in Hong Kong, our banknotes are printed by Hong Kong Note Printing Limited, and issued by three banks. So here I have three $100 notes.
This one is issued by the Bank of China (Hong Kong) Limited. This one by the Hongkong and Shanghai Banking Corporation Limited, more commonly known as HSBC. And finally this one, issued by Standard Chartered Bank. – Okay, so who cares? You've all probably held and seen foreign currency at some point in your life, and know how it can be a pain to exchange currency when going from country to country. You've also probably had the experience of shopping in another country and trying to convert the cost of something from one currency to another. Now, to be honest, although I've lived in Hong Kong for nearly ten years, I still find myself frequently converting the price of a good into US dollars just to help me get a better sense of the cost or value of that particular item. Well, the reason this matters is a combination of trust and power. As we outlined in module one, the value of currency is really only sustained by a broad sense of communal trust. And by changing the nature of money, we are potentially altering the foundations of trust, which can have broad implications across society. – But also, this is about power. The ability to print your own currency is a significant source of power. Maybe you've seen a movie where criminals try to steal or create ink plates so that they can print their own money, and to be honest, when I was a kid, that was a dream of mine. As another example, there's been a lot of discussion about the power the United States has in the world because of the outsize influence of the US dollar, which is widely considered the world's reserve currency. As an example, right here in Hong Kong our money is pegged to the US dollar, meaning the value of the Hong Kong dollar rises and falls along with the US dollar. So think about how much power that involves.
It means that some folks in the US who have maybe never even been to Hong Kong can change the value of our currency here, which in turn can affect the cost of everyday goods, housing prices, the value of your personal savings, company profitability and many, many other things.
Additional Readings
5.1.3 Will Governments Accept New Currencies?
– Okay, so returning to the street vendor example again. On that street in China there’s no currency involved. Now, is the goal of FinTech innovation to eliminate all physical currency? Or perhaps even government-backed currency altogether? If the latter, do you think that governments will just roll over and allow control of their currency to be taken away? – Now, in the China street vendor example, although people were paying with the web app, the transactions were still backed by government-issued currency. What happens when this is not the case? Cryptocurrencies like Bitcoin are usually neither government issued nor backed by government currency. Will governments be willing to allow the use of these cryptocurrencies within their borders? And possibly even adopt one of these currencies as their own? – Some nations have already announced that they would move to cryptocurrency in some form. For example, Venezuela launched the Petro cryptocurrency to help the country amid international economic sanctions. But the Marshall Islands is the first country to launch a legal tender cryptocurrency, meaning their new currency will be recognised as legal tender, real money, and will have equal status with their current currency, which is, you’ve guessed it, the US dollar. We told you: the US dollar, lots of power. – Even the name of the new Marshallese currency, it’s called ‘Sovereign’ after all, is a statement about power. The name was chosen to emphasise the sovereignty of the country, which has a history of colonisation, nuclear testing and resulting poverty. When discussing the controversial cryptocurrency, the president said, “This is a historic moment for our people, finally issuing and using our own currency, alongside the US Dollar. It is another step manifesting our national liberty”. – This switch will have a lot of implications, and could be the start of a new age for money and finance.
Interestingly, the International Monetary Fund (IMF) warned the Marshall Islands government about issuing such a cryptocurrency. They were concerned that the currency could be manipulated by crime syndicates and fraudulent business practices, the types of activities that have often been tied to cryptocurrencies, and also that foreign governments could cut financial aid to the Marshall Islands if they broke from the US dollar in favour of their own e-currency. Okay, so let’s stop and consider some important questions. Do you think that cryptocurrency and other FinTech solutions will ever be largely adopted by banks and governments? Or will they lead to a decentralised future where banks and governments are less influential in these areas? Will TechFins take over the finance industry? And will other countries adopt cryptocurrency as legal tender?
Additional Readings
5.1.4 Will FinTech Take Control of the Financial System?
– Okay, so in summary, FinTech is leading to some really amazing efficiencies that can help a lot of people bypass middlemen and save money, but it can also lead to exclusion within countries and exacerbate divides between countries. This is largely because these innovations are completely changing the very concept of money, which is leading to questions about trust, proximity and especially power. – So where does this leave us? Will banks allow their power to be eroded by decentralised cryptocurrencies, peer-to-peer lending networks and other FinTech innovations? Or will they use these developments to further consolidate their power over financial products? – Will governments stand idly by while their power over currency, personal identification, and other traditional government-based power is taken away by FinTech startups? – And how will both banks and governments react to the rise of TechFins, who seem to be growing daily, increasing in both power and profits as they expand further and further into services traditionally handled by other institutions? In this module we will explore some of these questions. But first, let’s talk about what we mean when we say decentralised, or democratised.
Additional Readings
Zetzsche, D. A., Buckley, R. P., Arner, D. W., & Barberis, J. N. (2017). From FinTech to TechFin: The Regulatory Challenges of Data-Driven Finance. University of Hong Kong Faculty of Law Research Paper No. 2017/007. Retrieved from http://dx.doi.org/10.2139/ssrn.2959925
Marous, J. (2018). The Future of Banking: Fintech or Techfin? Forbes. Retrieved from https://www.forbes.com/sites/jimmarous/2018/08/27/future-of-banking-fintech-or-techfin-technology/#5bbdbbd15f2d
Ren, D. (2018). Tightening Regulations Make FinTechs Easy Takeover Targets for Banks Stepping Up Digitalisation Drive. SCMP.
Retrieved from https://www.scmp.com/business/companies/article/2159718/tightening-regulations-make-fintechs-easy-takeover-targets-banks
5.2.1 Is FinTech an Evolution or a Revolution?
– Now, if you pay attention to the FinTech space, chances are that you’ve heard the words decentralised and democratised a lot. FinTech experts of all varieties love throwing around these terms, but what do they really mean in a FinTech context? – Well, many see FinTech development as a natural process of technical advancement, much like the locomotive surpassing the stagecoach. Others see FinTech as a direct result of, and possibly even a fight against, traditional centres of financial power. Some believe the power to control banking, currency, and even our own identity has been held by an elite few, and that the control of finance has been neither transparent nor democratic. – Whether as a result of the natural evolution of technology or as a direct backlash against existing power structures, the reality is that FinTech is seen by many as having the potential to completely change, and possibly even destroy, existing financial power structures. And it’s important to understand both these motivations and the possible outcomes. So do you think that FinTech advancements are a natural evolution of technology or a direct result of mistrust of institutional power?
Additional Readings
5.2.2 Have We Lost Trust in Financial Institutions?
– As we have discussed many times, finance is largely built on trust, and in the past, institutions like banks and governments have served as the guarantors of trust in the financial world. But whether as a cause or an effect, trust in institutions has diminished significantly in many countries over the past decade. – Now, probably the best recent example of a cause for distrust in the financial world is the global financial crisis, including all the major financial scandals that were exposed as a result of the crisis. For most of us, we had to stand by and powerlessly watch as the global financial system nearly collapsed. We had a daily reminder of how flippantly certain members of the global financial community pursued profits at the expense of their customers, and how government regulators were not really sufficiently protecting us. – Millions of people around the world lost their homes, their savings and essentially their futures. In the US alone, it is estimated that American households lost approximately $20 trillion in wealth as a result of the financial crisis. And as a result, it is no surprise that many of these people began to distrust the very institutions that were meant to protect and serve them. – Now, as a personal example, my wife and I bought our first house the year I graduated law school, which was around 2005. Obviously we didn’t know that we were buying at pretty much the worst time possible, with the financial crisis decimating the real estate market only two years after we purchased our home. Now, when the market crashed, the value of our home dropped by over 30% and it took a really long time to recover. Well, my family recently sold our home, about 13 years after we purchased it. The selling price? Exactly the same amount that we purchased it for back in 2005. So we’re grateful that we didn’t really lose any money.
But there was a lost decade where many people around the world lost most of their net worth and have struggled to recover ever since. – Let’s be honest, many large financial institutions have not done much since the financial crisis to reduce our concerns. As noted earlier in this course, banks such as Wells Fargo and HSBC have had multiple high-profile scandals that have gutted their customers’ trust. And it seems every week there’s some new scandal involving financial institutions.
Additional Readings
5.2.3 Can We Trust TechFins?
– Now, during the past decade, as people shunned banks and traditional holders of power, they turned instead to the so-called TechFins: digital platforms like Facebook, Amazon, Google, and Tencent that provide eCommerce, peer-to-peer lending, and communications, and increasingly serve as the keepers of our digital identity. – The rise of the gig economy and social media sites has meant customers now have more control over nearly every consumer service, whether hailing a taxi, deciding where to stay while on holiday, or even how to pay their bills. These large digital platforms have transcended many traditional financial institutions, not only in terms of customer engagement, but also in terms of trust. – But after more than a decade of explosive growth, the TechFins are themselves now caught up in many scandals and are seeing their own trustworthiness questioned. And some contend that these companies are now so large and powerful that they’re actually influencing government policy and even national elections. So I’ve got a question for you. Do you trust TechFins like Amazon and Tencent more than you trust banks? Do you think that the TechFins should be regulated like a utility? And what do you think about companies like Facebook entering the crypto payment space?
Additional Readings
5.2.4 Should TechFins Be Regulated Like a Utility?
Okay, so a lot of these TechFins are getting just gigantic, and people have started to distrust them a little bit. So one of the major conversations in this space now is: should these TechFins be regulated like a utility? – Yeah. So first, maybe, describe what it means to be regulated like a utility, but then, do you think that should happen? – Yeah, I think this is a timely question, to be honest, and it will continue to be a big question as people think about the increasing agglomeration, or collection, of power that’s occurring. – They say the Big Five, right? – That’s right, yeah. – As opposed to money being collected by a select few financially, now it’s really data that’s being collected, and many of us in the world who use the different services provided by these large technology companies are embedded in their ecosystems, so it’s difficult to extract yourself from that if you tried. And so one of the exercises we did in one of my classes recently, for one of these large technology companies whose name everybody knows, was to list out what services they had. And this class was at 9:30 in the morning. And I said, “From the time you woke up this morning to the time you came to class, how many of you used the services on the board?” And we had a list of maybe 15 types of services, from this one company, and most of the class had used at least 80% of the services, just between the time they woke up and the time they got to class that morning. And then I asked them the question, I said, “Imagine trying to take yourself out of that ecosystem, i.e. not use any of these products.
Do you think you could?” – You’ll see articles about that, some journalist on Medium or something saying, “Oooh, I stayed away…” – Or like, “I’ve tried this for five days.” – And everybody says it may be possible, but there’s so much transaction friction in doing that, in moving from that to another service, that nobody would do it. And that’s a challenge, right, because that generates so much data about you individually and us collectively, which is what is fuelling the continued growth. – Yeah, one of the things that I think is super cool to think about is that within the lifetime of most people, certainly of you out there watching this, there have been no new utilities. So when we talk about utilities, for those who are not familiar, these are services and goods so essential to society that the government actually has special regulations about them; in fact, they often have caps on the amount of profit or revenue they can actually earn. So for example, the top utilities would be electricity, water, sewage, sanitation collection… anything else? – No, I think those are the main ones. – That’s pretty much it, right? – And in a lot of places in the world some of those are actually government-controlled entities, or partially government-controlled entities. – Yeah, or they’ll give them a legal monopoly or something, right. Okay, so now imagine this. What we’re saying when we say regulated as a utility is: has Google become so critical to society, in terms of its search engine, do we use it so much, is it so critical, that it should actually be treated as a public good, just like electricity, right.
Now, you have to understand, electricity was a technical innovation as well, right, as was transportation, sanitation, water. And at some point the technical innovation of pumping water into your home became a public good and therefore a utility. But it was started by innovators, started by private companies, right. So now the question is, have we advanced to that point with social media? Is Facebook so important to everyday society that all of us in society have a stake in it? That’s really the question that we’re asking. – And so, to that point, if we think about Google or Facebook, if we think about a single product and try to make the argument for regulation as a utility around a single product, probably not. But if you think of it collectively, so if we use Facebook as an example, Facebook as a social media platform has a lot of users, but then think about its influence beyond that, like in instant messaging: there’s Messenger on Facebook, there’s WhatsApp, which they own, there’s Instagram, which is another part of Facebook. Those are different messaging platforms that, together, have a far greater reach. And then, one of the things apparently being debated at Facebook at the moment is whether they’re going to have their own payment system. – Right. – And so if they were to roll that out across all their users, across all these platforms, then immediately they would become one of the largest financial players in the world. – Immediately. – Immediately. And so then at that point, back to the question: should they be regulated like a utility, should they be regulated like a financial institution? It starts raising a lot of interesting questions. – Well, and to that point, just very quickly, because again, for those that don’t know the legal or even economic history of these things, you need to understand that when we say regulated as a utility, that probably means breaking up these companies. Right.
So for example, railroad companies. Telephone companies. Electric utilities. What it means is they’re so big, so powerful, and the service they provide is so essential to all of society, that the government actually goes in and breaks them apart into different companies, so that there’s more competition in the marketplace and all of us have better access to that core service. So, in this instance they could go in and say Messenger is pulled out of Facebook, Instagram is pulled out of Facebook, WhatsApp is pulled out of Facebook, and they cannot have any direct contact with each other as a single financial institution. – And I would say, that’s right, if we look at the United States, probably the most recent version of that that’s not ancient history is telephones. – Yeah, telephones. They call them the Baby Bells, because they took this massive Bell company and broke it up. – They had regional Bells. – Exactly, yeah. And then they had to compete with each other. – And so, yeah, it would be interesting. I mean, right now in the United States in 2019, in the lead-up to the next Presidential Election, there’s a candidate right now – Yeah, who’s already saying this – who’s already advocating for a break-up of these companies based on, you know, an anti-competition kind of rationale. It would be interesting to see how this develops, because I think that idea is tapping into a more pervasive feeling that people increasingly have about privacy, identity, data, who should have what information, how can I control my identity? All of these really fundamental questions that users of technology are having. – And there’s one last thing before we close on this. What it also means to be regulated as a utility, as I mentioned, is that this often means caps on profit, right.
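To make the profit-cap idea concrete, here is a rough numerical sketch. The 8% permitted return and the asset base are hypothetical round numbers, loosely in the spirit of schemes like Hong Kong’s Scheme of Control for power companies, not the actual terms of any real regulation.

```python
# Illustrative utility-style profit cap: the regulator allows a fixed
# percentage return on the utility's asset base; anything earned above
# that is returned to customers (e.g. via rebates or lower tariffs).

PERMITTED_RETURN = 0.08  # hypothetical 8% cap on annual return

def settle_year(asset_base: float, actual_profit: float) -> dict:
    """Split a year's profit into what the utility keeps and what goes back."""
    allowed = asset_base * PERMITTED_RETURN
    excess = max(0.0, actual_profit - allowed)
    return {
        "allowed": allowed,                      # maximum profit the cap permits
        "kept": min(actual_profit, allowed),     # what the utility retains
        "rebated": excess,                       # what flows back to customers
    }

# A utility with a $10bn asset base earning $1.1bn keeps roughly $0.8bn
# and hands back roughly $0.3bn:
print(settle_year(10_000_000_000, 1_100_000_000))
```

The design point is that the regulator is not fixing prices directly; it is capping the return on assets, which indirectly disciplines what the utility can charge.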
So even if you don’t break up Google, for example, because maybe you can’t break up Google, if they’re regulated as a utility that might mean they’re only able to earn, say, 8% profit a year, right. Like a lot of electricity companies, these utilities are actually capped; here in Hong Kong the major power producers are capped, so they can only make a certain amount of profit every single year. It’s not like a typical competitive environment, because you can only have so many electrical companies, basically, so what the government says is you can’t charge customers over a certain amount. So, what do you think? Should these big multinational, or even national, FinTech and TechFin companies be regulated as a utility, or should we allow them to just compete on the open market?
Additional Readings
5.2.5 Five Ds of FinTech
– So, where does that leave us? Well, once you tie everything together, it has led many people to an increased desire for transparency and control in their lives, including in goods and services. People are shifting to a machine-driven trust that doesn’t require a third party, whether a bank, government, or other potential intermediary. Some have even suggested that the rise of blockchain and cryptocurrency is closely related to people’s faith, or lack thereof, in government-backed institutions. – This desire for more trust, transparency, and control is summarised by what one commentator described as, quote, the five Ds of FinTech. – The first D is democratisation. From the beginning, many traditional elements of finance have only been available to the wealthy and elite. The democratisation of finance entails bringing such resources to the masses. For example, wealth management, life insurance, and the ability to borrow and lend money are some of the many services that in the past were largely unavailable to the poor. But many FinTech innovators are bringing these services to people throughout the developing world. – The second D is disaggregation. Disaggregation is a big word that basically means to divide or to break apart. Now, in the past, financial services have largely been concentrated in only a handful of entities. Leaders in FinTech have started to give customers better and quicker access to information, thus reducing costs and increasing options. An example of this would be a website like LendingTree that shows many different mortgage options so that customers can shop between different lenders. – The third D is disintermediation. Again, this is a really big word, but basically it means cutting out the middleman. We’ve already discussed this quite a lot in this course: many FinTech solutions hope to save customers money and time by cutting out the middleman.
Peer-to-peer lending services and blockchain-based remittance services are examples of this. – Okay, so, the fourth D is decentralisation. This focuses on the idea that a lot of FinTech solutions are not controlled by a single, central entity, but instead give control and power to many players simultaneously. Blockchain-based products are probably the best example of this. As discussed, blockchain is a distributed ledger system, meaning the data is synchronised and shared across multiple locations simultaneously. – The fifth and last D is de-biasing. Many FinTech innovators hope to remove historical bias from the financial system and to create a level playing field with transparent information. The belief is that financial institutions have been too exclusive and closed off, and the hope is that the use of technology can make finance easier to understand and more accessible. – Okay, now, in summary, whether because this is the natural evolution of technology or the result of a backlash against banks and other large institutions, FinTech innovators are really promoting solutions that provide more transparency, more control, and more equality for consumers around the world. They want to enhance financial inclusion so that products that were previously only available to the rich and powerful can be held by everyone. – But what do you think? Is this a realistic goal? Do you think that the institutions that traditionally hold power will really allow this to occur? Or is this future inevitable? Before we can really answer those questions, it’s worth taking some time to understand who has traditionally held power. Specifically, let’s consider who controls banking, currency and our very identities.
Additional Readings
5.3.1 Who Controls Banking?
– Microsoft founder Bill Gates famously once said that “banks are dinosaurs, they can be bypassed,” and that “we need banking but not banks.” Take a moment and consider those statements. Do you agree? Mr. Gates made those comments in light of the vast advancements in technology during the 1990s, knowing that banks needed to adopt new technologies or find themselves left behind. Does that sound familiar? Would it surprise you to hear that Mr. Gates made these statements back in 1994, nearly 25 years ago? – Now, in some ways, Gates was wrong. Between 1994 and 2018, bank operating revenue tripled, earnings quadrupled and equity capital quintupled. For those who are not in finance, this simply means that banks have gotten bigger and bigger and generally more profitable. But some of his statements, and the concerns they raised among bankers, have clearly begun to come true, if they are not already a reality. For example, an American banker named William Randle warned that “a failure on the part of banks to recognise this threat to the banking industry from technology will result in the loss of control of the payment system, as well as the loss of control in core customer relationships.” – Sound familiar? Some estimate that because of the TechFins and large digital platforms, incumbent banks may lose one third to half of all payment volumes by 2025. They point to the fact that these established tech giants already have large user bases, low online acquisition costs and big data customer insights, and they’re increasingly getting internet banking licences. Some of these TechFins may have pre-existing scale and client reach larger than traditional financial institutions; think Alibaba, Facebook and Tencent. And in some parts of the developing world, services like M-Pesa are allowing people to largely bypass traditional banks altogether. – So where does this leave the banks, and what does it entail for the financial system and banking as a whole?
And where does it leave us as consumers?
Additional Readings
5.3.2 Will Banks Give Up Their Power?
Okay, so Bill Gates said that these banks are dinosaurs and they’re gonna be supplanted. So what do you think, Dave? Are they gonna be supplanted by FinTech companies? – I think it’s interesting, because he said that people don’t need banks to do banking, and I think ultimately that is kind of where the shift is going. I can’t think of anybody who enjoys lining up at a brick-and-mortar bank, just waiting in line. Even for the employees that work there, that’s not a pleasant experience, and so for any kind of friction that exists, one approach that entrepreneurs take is trying to minimise that friction, whatever it may be. And going back to Bill Gates’ quote as well, that’s really interesting information about how banks have become bigger and more profitable, and larger in scale. Part of that, I think, is a continuation of the wave of efficiencies that we’ve seen throughout history, right? As technology evolves, be it steam power or electricity or whatever it is, eventually some pressure is put on existing cost structures, and things, in theory, should become more efficient. And as new technology comes into play, I think one reason many banks are absorbing, or trying to absorb, some of those technologies is to reduce cost. Part of that comes from cost-efficiency: laying off people. Part of that is trying to find technological solutions to allow them to lay off more people. And I think that’s led to some of the scale and profitability that you were initially referring to. – Which does get to some of our points about the idea that one of the ways that companies are becoming more efficient, either through FinTech or just technology generally, is by essentially letting go of people, right? Which is an ethical question, an overarching question for this course in and of itself.
And just to highlight the point, the research on the growth of the financial industry from 1994 to 2018 did show that a lot of that financial growth was, I guess, helped by the fact that there are actually significantly fewer actual bank entities, right? Meaning the brick-and-mortar shops and things, right? – I think with a lot of this technological evolution, there’ll be a lot of stuff we can’t see as day-to-day consumers. But the point that we’ll all be able to see is, there’ll probably be fewer and fewer physical presences. So going back to this point about all the great experiences you had lining up at a traditional bank, that experience will hopefully be less and less common. – And they used FinTech to actually push people away from that experience. A lot of the accounts that you’ll see, like accounts I have right now, will actually charge you if you go into the bank, whereas they’ll let you do unlimited online banking for free. Generally speaking, right? They’ll let you go into the bank a certain number of times before they charge you. And so over the past decade, they’ve actually affirmatively been pushing people away from a business-model perspective. I think the first example that I had of this, the first non-brick-and-mortar bank that I worked with, was something called ING Direct, where they had this Orange account, and there was no physical location anywhere in the world. At the time, this was quite revolutionary. They were able to keep banking costs very, very low, and therefore give you better interest rates. And I remember, even before I had moved out of the US, I was banking through this kind of external system. But it leads me to another question then. – But what’s interesting about that is that ING Direct was tied to a traditional financial company. – Correct, that’s exactly my point, right?
So the point that I’m now asking is: okay, for those of you who are viewing FinTech, blockchain, Bitcoin, whatever, as this decentralising, democratising force, or believe that FinTech is actually gonna pull away at the traditional sources of power, my question for you is, is that kind of a false vision of the future? Are we instead just seeing a retrenching of existing power bases, especially within the financial industry? And as a quick example, I think a week or two ago JP Morgan announced their own kind of foray into cryptocurrency, right? There are so many examples of this; obviously online banking was not started by non-banks. Those were banks that did it. So what do you think? – Yeah, I don’t think those paths are necessarily mutually exclusive, to be honest. Banks in their current form have existing customer bases, and they obviously have technology they can scale. A lot of the investments that many large commercial and international investment banks are making are into enhancing technology, right? Their ability to do certain things. So that could either be in-house development or through acquisitions. So I think that will continue. And as an entrepreneur, you may have this vision, right, about, oh, we’re gonna produce something. But ultimately, if someone comes knocking like, hey, we want to acquire this technology to bolt it on to something we’re doing, that can be a very attractive proposition for an entrepreneur, right? And so I think a lot of these technologies will find their way into established players, and so part of that will be established players being moulded, so to speak, by this new influx of technology. But, going back to some of the stuff we discussed earlier, there’s gonna be a whole host of non-traditional financial players or institutions who also kind of get into the game. And perhaps one example of that is PayPal, right? – Sure.
– A lot of people look at PayPal as kind of the first big domino in the sense of trying to get into this payment space, which had been dominated by more mature players who were not so savvy with technology and didn’t necessarily have a world view of how they wanted to institute or employ technology in the work they were doing. And so I think we’ll see a bit of that, or a combination of both. I don’t think those are mutually exclusive paths. But increasingly, I do think we’ll see, again, non-financial players who, based on their user bases, roll out financial institutions themselves. So another interesting example from our part of the world: one of the large social messaging platforms in Asia is a Korean application called KakaoTalk. And about a year or two ago, they rolled out Kakao Bank, which is effectively online-only. There are a few physical presences, but it’s almost overwhelmingly online. And as a result, their cost structure is vastly different from a traditional bank’s. And in this particular situation for Kakao Bank, their online user acquisition, kind of the growth, was rapid. And they were able to offer much more competitive fees, rates, and other products to customers who otherwise would’ve had to go only to a brick-and-mortar bank. – Yeah, and we’re seeing that in others, and certainly in a lot of the Chinese apps as well. Not so much on the cost side, but from the efficiency standpoint.
Additional Readings
5.3.3 Should FinTech Replace Banks?
Okay, so let’s look then into the future. The last question I have is specifically on the ethics side of things. I want those of you out there to think of a world without this kind of traditional banking system. And some of you may be salivating at the idea; that sounds like a great thing, because again, you hate waiting in line. But is it a good thing for us to be so far removed from the banking system? And just to give you a bit of context for that: there are certain institutions within society, including the medical field and the legal field, where we intentionally require them, for example, not to incorporate, to ensure they are a strong part of these communal systems. And I’m just curious what you think. I mean, if the banking system is so decentralised that you don’t have that corner bank, that you don’t have someone you can talk to, to go borrow money from to buy a home… right, we’ve already talked about AI being introduced into this. If there’s no human component, do we worry, ethically speaking, about the ability for people to actually access finance? And not just in terms of accessing finance, but actually understanding it and, I guess, being in control of their financial future? – Yeah, so I think that’s a really compelling point, in the sense that there are a lot of things that would come into play here. One is this education component. There’s an increasing push in many places in the world to ensure that, from a young age, children understand finance. Not in the sense of, oh, let’s all go be Warren Buffett and invest in stocks, but understanding the purpose of interest and credit, and why sometimes that can be good and sometimes it can not be good. And I think part of that is just that basic financial education is important. Now, building on that, to your question.
You know, this is something we will touch on throughout the course, but kinda the issue of access. There will surely be populations that will be displaced if they’re not accounted for – Yeah. – in this new system, right. So either because they can’t access the technology yet, or because, even though their nation or society generally accesses the technology, because of age or other reasons they can’t. They’re not as familiar with it, they don’t feel comfortable with it. They’ll be increasingly shut out. So that process, by its very definition, will need to be transitional, in order to ensure kinda the widest amount of fairness as possible. But is that the ultimate kind of end goal of all of this? To be honest, I don’t know. Like, what is the end state? What is the kind of normative end state of all of this? Where do we really wanna go? I don’t think anybody knows. Right, people just feel like, “Hey, we’ve had friction in this, let’s just make it smoother. Is there a way we could do it faster?” I mean, this is effectively what the motivation is. I don’t think a lot of people are thinking about what the next order of implications is if we do this, from a cyber perspective. – I do think that what you’re going to see is a lot of what we call “pro bono” or “low bono”, meaning, like, free services. Where banks or other nonprofits and community groups are going to form credit unions, they’re going to form financial institutions, in order to ensure banking for the poor. You’re already seeing that here in Hong Kong. We’ve talked about foreign domestic workers here in Hong Kong, how 10% of Hong Kong’s working population are migrant workers, many of them underbanked, and there are multiple sources of, I guess, traditional banking, especially from a credit union standpoint, where they try to ensure they can get access to these things.
Still the penetration is very low, but I think, as the traditional industry gets more and more removed from people on the ground, especially those that are more poor and disadvantaged, you’re going to start seeing more and more kind of service providers coming in on a charitable basis. – And I think the challenge with kinda a lot of the stuff we’re discussing comes in two ways. One is: as technology evolves, how confident are we that these stakeholders that are serving people who are, or will be, underbanked, or not as economically advantaged, will be able to scale up in terms of the technology that they need? You know, in a lot of the discussions that I have talking to banks, they have no difficulty hiring for most roles; they always have difficulty hiring for what in the past was considered IT. But now a lot of banks don’t call them “IT” anymore, they call them “engineering”, or “technologists”, you know, to give it a different veneer, a different polish. – Well that’s programming, right? You’ve got the privacy side, you’ve got the security side, they’re building out new tools, including the blockchain type stuff, right. There’s a lot of things. – Yeah, and so what they found is, a lot of these banks, particularly in the US and Europe, are competing directly for the same talent that maybe Google or Facebook is competing for. And so this is the challenge that they face, so to that– – Well especially because those graduates don’t think of going to J.P. Morgan, they don’t think of going to HSBC– – Immediately! – They’re thinking Silicon Valley, right: they’re going to Google, they’re going to Facebook, they’re going to Netflix. – That’s one question, I think, that’s to be determined.
And the second, I think, which is more broad, and which I think pervades many aspects of this course, is the idea of regulation. So historically we’ve had financial regulators, and we’ve had regulators over certain aspects of the communication industry. Now, is that going to change as we have different players that start occupying the finance space? – But we know it will. The question is how. – Yeah, so if you had a TechFin that came from a non-financial industry come in to do something like traditional banking, maybe not brick-and-mortar, just online, will they be doubly regulated? I mean, how will that work? Particularly when the regulatory regimes are probably not consistent, as well as not global in nature. And so then you get a lot of interesting and kind of frustrating friction that will occur, and I think that will take a long time to unravel as people try to figure that out. – Yeah, so I think these are a lot of the questions that society is going to have to kinda entertain, and hopefully answer, over time. One of the things that I often think about is, should we even consider banks as traditional banks, even now, or have they really already morphed into essentially technology firms? A lot of the banks that I know, and a lot of the people that I know work in them, actually have as many or more IT professionals as they do actual bankers and finance professionals. And so I think, as we continue to move forward, some of the questions that we need to consider are how we as a society want to engage on a personal level with these institutions. And I guess, on a more simple question, you know, do you hate going to your bank as much as we do? Do you see this as a positive thing, or a negative thing? We’re curious.
Additional Readings
5.4.1 Who Controls Currency?
– We already discussed control of currency quite a lot earlier in this module, so we won’t dwell on it for too long here. But it’s worth a reminder that for millennia governments have exclusively controlled their currency. – But as we’ve already noted, government control of currency in the traditional sense is loosening somewhat because of FinTech innovations. For example, governments usually don’t have control over cryptocurrencies and have been frantically enacting legislation to help them control the use of these new products. – The reality is that physical currency has a lot of drawbacks. The use of physical currency is difficult: it can be heavy, it can be expensive to produce, and it can even lead to increased crime. For example, in 2016 the Wall Street Journal showed that it cost about 1.15 cents to produce one penny. And nickels, which are worth five cents, cost seven cents to make. How crazy is that? And it’s not just a production cost. It is estimated that in the US alone over 60 million dollars’ worth of coins are lost out of circulation every year. Maybe some of that money is hidden in the cushions of your couch. – Now as an ironic twist, perhaps the best portrayal of the inefficiencies of physical money can be seen in the people who use it for the purpose of being inefficient. You may have come across stories about people who bring trucks or wheelbarrows full of coins to settle fines as a way to show their frustration or to punish the recipient of the payment, knowing that it will take them hours to count every single coin. – With some of these challenges in mind, perhaps there are some benefits in moving towards a more cashless society. In a similar vein, maybe our definition of money should evolve by removing currency from the exclusive control of national governments. – But will a cashless society block some people from the financial sector completely?
As you may recall from the China street vendor example at the beginning of this module, since I didn’t have the WeChat payment app on my phone, I was unable to simply purchase breakfast. And recent stories have shown that some disadvantaged populations in China, especially some of the elderly, have been largely blocked from participating in their vast digital economy. And those that are kept away from these elements of the digital economy might be even further distanced when cryptocurrencies begin to challenge fiat currencies around the world. – With these various developments, trends and changes occurring around the world today, what kind of world are we heading towards? Will governments largely retain control of currencies, or will blockchain and cryptocurrencies not backed by governments one day become the new norm? What unintended consequences may arise from the push towards a completely digital financial landscape?
Additional Readings
5.4.2 Should We Have Physical Currency?
So one question to consider is a really fundamental one about currency. Should we have it? And if so, who should control it? So what do you think, Dave? – Yeah, I mean, it’s a great question. I wouldn’t have thought, years ago, that it’d be a relevant question, to be honest, but now it’s a legitimate question and certain governments are really asking it. You’ve got developed countries: I think it was Sweden that announced they wanted to go to a cashless society, and now they’re kind of pulling back a little bit and realising they didn’t think it through completely. You’ve got other places: we talked about the Marshall Islands maybe having an official currency that would be crypto or crypto-based. The reality is, it’s probably a premature question. The other reality is, no matter how you and I answer this, sovereign governments are gonna be deciding the answer to this question. So I think maybe the more relevant question to me is, do I think governments will give up a physical fiat currency? I think the answer is no. I certainly think the US government is definitely not going to give that up; it provides them so much influence and so much power around the world. Other governments, like say China, use currency as a way to kind of control their state-run economy, which does make sense. And so I think this is probably not going to be a question that is gonna be realistically considered in most countries any time soon, and I would say it’s quite the opposite in fact: I would say they’re probably going to be very opposed to the idea. And you have, I think, the World Bank, when the Marshall Islands was proposing the cryptocurrency as a potential currency, they specifically warned them against that, saying it would drive down their donations and other things because other countries wouldn’t like it. – I think you’re right.
I think the idea of influence which you mentioned, particularly for countries that have large economic influence, say like the United States, a large part of that is driven by the fact that the US Dollar is a widely accepted currency. So if we think about the ability for United States’ laws and regulations to permeate the rest of the world, with banks and other parts of the world having to follow these regulations, a large part of that is the fact that they transact in US Dollars, which allows the United States to extend the arm of regulation beyond its borders. So for sure, they don’t wanna give that up. And then collectively, something like the European Union, the fact that they’re using Euros as opposed to individual state currencies now provides them, as a bloc, a little bit more influence versus when they were disparate countries using their own currencies. And so the idea of influence I think is right. I think ultimately what we’ll probably see, outside of these instances like the Marshall Islands, which are kind of small and somewhat isolated economies, is definitely a move towards kind of hybrid type systems over time. Probably in our lifetime. – Cashless payment systems. – Probably in our time. Most of that will likely be backed, I think, by some level of fiat currency. And I think the reason we probably don’t wanna go to a system that is completely devoid of any physical currency is the inclusion issues which we’ve talked about. Inevitably, if you do that, then you will exclude people who for whatever reasons are not able to access the digital technology. – Or don’t want to. – Or don’t want to. – Some people wanna stay off the grid. – That’s a challenge. – Just one thing to add, for those that are maybe not as commonly in a developing country: physical currency is actually one of the ways that developing countries control the use and spending of their currency.
So they don’t let it leave in large quantities. They don’t allow the free conversion of their currency because, for example, let’s say someone is corrupt and steals a billion dollars, or they make several billion dollars in a real estate deal and then immediately pull that out; they wanna make sure they can control the use of that currency. So having a physical currency is actually a way that, say, a country like China has kind of controlled their development. – And managed money supply and things like that, absolutely.
Additional Readings
5.5.1 Who Controls Identity?
– Okay, so, if somebody asks you to identify yourself, how would you do it? Imagine that you’re driving a little too quickly and you got pulled over by the police, or you’re travelling from one country to another and crossing through immigration. How do you prove that you are in fact you? – So, I’m not sure if you were following me this weekend, (David B. laughs) ’cause this is eerily reminiscent of something that happened this weekend. – Were you travelling? – I was driving this weekend and got pulled over by the police and got this ticket, which is in my pocket because I have to go pay it later today. – Did they ask you for ID? – They did ask me for ID. I produced the ID that we have in Hong Kong and my driver’s licence for them to confirm who I was. And that’s normally how we would do it, right? – Yeah. – I mean, based on what we’re talking about. We are issued, wherever we live in the world, some form of government ID. And in a lot of situations, multiple forms of government ID: identity card, passport, driver’s licence. – This happened to me too. This weekend, for a better reason, (laughs) I was in Southern China, dealing with some stuff up there in Shenzhen, and same thing. You cross over the border, and now it’s actually quite advanced. Anyone that’s crossed the border between Shenzhen and Hong Kong, you know, there’s biometrics: they take your fingerprints, they take a facial scan, and they look at your identity documents still. It’s actually very regulated. And interestingly enough, if anyone has ever Googled me, you’ll see that my hair or facial hair often changes, so they always sit there and stare at the passport trying to identify– – Make sure it’s you. – Yeah, make sure it’s me. Like, who is this person in front of me? But you’re right, this is what happens to people on a daily basis in some instances. – Like us, you probably already have an official government-issued ID.
But you may be surprised to learn that according to the World Bank, over 1.1 billion people around the world lack any officially recognised proof of their existence. In fact, this is so prevalent and critical that the United Nations’ Sustainable Development Goal 16.9 is a goal that hopes to ensure everyone around the world has access to a legal form of identity. That sounds crazy, right? It means that one out of six or seven people does not have any form of official ID. – Without any form of identification, it can be challenging to access institutions and opportunities that many of us take for granted, such as health care, social protection, education and finance, not to mention travel, driving, voting, employment, et cetera. Think about opening a bank account. Without any official documents to prove that you are you, chances are that you’ll not be allowed to open an account. And then again, a bank account may be a prerequisite for a lot of other things, such as getting a job or securing a long-term place to live. Overall, people without official ID risk being marginalised and excluded from society. – [David L.] And in the case of large refugee crises, this can turn into a global political concern. According to the United Nations, it is estimated that 80% of the roughly 65 million refugees in the world do not have any form of official documentation to prove their identity. And while there are sadly many examples to consider, let’s briefly look at one recent example: the Syrian refugee crisis. – [David B.] Since the Syrian Civil War started in 2011, more than five million people have fled their country, often having to leave their belongings behind, including their official identity documents. In fact, there have been reports saying that 70% of the refugees lack basic identity documents. – [David L.]
To address this issue and provide aid, the World Food Programme operated by the United Nations piloted a project called Building Blocks in the Syrian refugee camp Zaatari in Jordan, where they began utilising biometrics and blockchain to facilitate aid and to identify individuals. To be eligible to receive assistance, refugees have to record their biometrics, which are then stored in a UN database, and they are allocated an account on the Ethereum blockchain. – [David B.] Purchases are then made by scanning their irises at local shops, which confirms their identity and facilitates payment. No cash, card, or mobile is needed for the transaction to take place. – So, why should people like you and us, who already have government-issued IDs, you have your driver’s licence, maybe a passport, why should you even care about this? How will this impact you? The reality is that governments in most parts of the world are building more and more of this technology into the concept of managing and controlling identity. This is an important question for all of us to consider as we think about the impact such technology will have on all of us. – Okay, so, what do you think? Do you think that moving forward these non-government-issued IDs will become the new norm? Or do you think this is more of an exceptional situation and that this type of change, the pace of change, won’t happen so quickly? And do you think that governments may accept outside, private, NGO-created identification systems, or are they always gonna wanna control and create their own ID systems?
Additional Readings
5.5.2 What Is the Future of IDs?
Obviously, the situation in Syria is quite compelling, and raises a lot of interesting questions. Perhaps, from the perspective of one person who’s sitting at home in a non-war-torn situation, you think, well, that’s really sad for them, but how does that impact me? But a lot of the technologies that they’re using, I think, do have broader ramifications, so maybe that’s something we need to take a second to discuss, and maybe unpack a little bit. – Yeah, so on the one hand, for those, for example, that are familiar with the refugee crisis as it spilled into Europe. So we’re both American, and as an example, a big part of the presidential debate from the last election revolved around Syrian refugees and other refugees, and so there was a lot of acrimony, maybe even some fear that was stoked, about how do we know who these people are? Is it possible there are terrorists that are using this system to get into another country? And a lot of it boiled down to this idea of identity. So I think these questions, very realistically, do permeate borders. But I think, to a point that you made earlier, it’s not just that this is an isolated experience in Syria, or as it impacts places outside of Syria; more specifically, these types of technologies, the biometrics in particular, are really permeating both national and private ID systems for everyone across the globe. Right? So I mean, you’ve crossed the border recently, I’m sure. Like, are you seeing more and more biometrics being used at borders, to kind of control who is coming in and who is going out? – Well yeah, sure. I think it’s not just at the border, right? I mean, that’s probably the situation, the context, where we see it the most, where we’re most cognizant of it. But, you know, every day I see my students use biometrics when they log into their computers or phones. – Yeah, thumbprint. – Biometrics, as a component of identity, will just increase over time.
And how we want to employ that, or how we want to control that, will be important. Because we’ve already moved beyond just the fingerprint; we have many new technologies now that do facial recognition too. – Yeah. – Right, and the underlying technology of that is biometric by nature. And so, how do we want to control that, particularly when it could lead to some unintended consequences with respect to who owns data, who’s monitoring whom, who’s controlling that, how it is being stored, and the kind of questions that pop out when you open that box. – So, some of you may have heard stories relating to these types of biometric IDs, including, using Apple as an example, facial recognition software, et cetera. I think there’s an interesting dichotomy. We talk about cultural lag a lot. Some people are terrified of these new identification systems; they want to hold onto the passport, they want a physical representation. Other people are very, very quick to kind of shift over; they say they’re more accurate, or safer, or whatever. One of the things that we saw, just using Apple as an example, is that with facial recognition in their software there was actually bias in the initial programmes, where Asian faces, for example, were not distinguishable, and so people with different faces could unlock somebody else’s phone, because the algorithm couldn’t recognise their faces as being separate, right? So there’s the issue that bad data in gets a bad result; we know that. But then, even if there’s a small failure percentage, let’s say just one or two or three percent, right? Among refugees in Syria, for example, that would have represented hundreds of thousands of people, and across the globe it’s the same thing. So if you have a bad password, you change the password, right? But if the fingerprint doesn’t match up, how do you go through and actually change that process?
Like, are we seeing, within the financial industry, or in terms of the technology, how are they addressing these types of very real, kind of human or cultural challenges as these technologies develop? – Yeah, I think part of what I’ve seen, and what I’ve experienced, is that, at least for now, biometric data, even with generally, statistically speaking, fairly low failure rates, is usually used in combination, right? So there’s biometric data, plus something else, that collectively creates your identity profile. I think that will continue to be the standard, so to speak, as more and more companies, and even governments and nation states, employ this technology. The combination of it creates a bit of redundancy, just in case you do face either those technology-sided issues, where the programming is a bit deficient or not up to spec yet, or a situation where certain aspects of identity have changed. And we know certain aspects of biometric data can change; fingerprints change, and things like that. And so the redundant aspect of having multiple forms of identity on top of each other, to create the full identity picture, I think that’s probably gonna be the approach that people take. And I think other forms of biometric data, as they get more and more advanced, will also be employed as well. – Mm.
Additional Readings
Pandya, J. (2019). Hacking Our Identity: The Emerging Threats From Biometric Technology. Forbes. Retrieved from https://www.forbes.com/sites/cognitiveworld/2019/03/09/hacking-our-identity-the-emerging-threats-from-biometric-technology/#177961785682
National Research Council (2010). Biometric Recognition: Challenges and Opportunities. Washington, DC: The National Academies Press. https://doi.org/10.17226/12720
Indrajit, S. (2017). The Cybersecurity Risks of Using Biometric Data to Issue Refugee Aid. The Henry M. Jackson School of International Studies.
Retrieved from https://jsis.washington.edu/news/cybersecurity-risks-using-biometric-data-issue-refugee-aid/
Aitken, R. (2018). Blockchain To The Rescue Creating A ‘New Future’ For Digital Identities. Forbes. Retrieved from https://www.forbes.com/sites/rogeraitken/2018/01/07/blockchain-to-the-rescue-creating-a-new-future-for-digital-identities/#674acdac5492
White, O., Madgavkar, A., Manyika, J., Mahajan, D., Bughin, J., McCarthy, M., Sperling, O. (2019). Digital Identification: A Key to Inclusive Growth. McKinsey Global Institute. Retrieved from https://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/Digital-identification-A-key-to-inclusive-growth
5.5.3 Will Governments Give Up Control Over Our Identities?
Okay, so the underlying question that we’re talking about in this entire module is whether or not FinTech can create a more decentralised system that would allow, in this particular instance, an external, maybe private organisation like a UN body to create a globally recognised, or cross-border recognised, ID system, or whether the entrenched powers within existing governments are going to want to kind of double down and utilise this technology for their own kind of internal systems, right. So I think the underlying question that a lot of people, myself included, think about, from a national ID standpoint, is whether the utilisation of these FinTech technologies is about speed and efficiency, or about more security, or, I guess, less privacy, and kind of controlling people. Now before we get into this question, again, just to use Hong Kong as an example, one of my other favourite things about living in Hong Kong, other than the Octopus card which I showed you earlier, is that the Hong Kong ID system that we have is a chip-integrated system, and it makes it very quick, very easy to go in and out of the airport. And so, as I travel, Hong Kong is probably the simplest border that I’ve ever crossed personally, and it makes going in and out very, very simple. For years, I saw that as a great advantage, but I do wonder, would I think the same thing as these biometrics are being introduced more broadly by a government, maybe, that I trust a little bit less, let’s say, right. What do you think? – Well, I think, you know, before we jump into that, taking a step back, I think it’s quite clear that that will be the case. That the– – Meaning, they’re gonna be integrating more and more. – More and more, right. So, you know, sometimes when you travel, you go to a country where they maybe don’t use that; it’s very traditional. Passport, look at the picture, stamp, and go through.
I would say, increasingly, many of the countries that I’ve been to in the last few years, however, even if they were so-called developing countries, have incorporated some level of biometrics, even if it’s just a fingerprint scan, into this identity portfolio, so to speak, that they’re constructing. And that will just continue unabated. That is the tide that’s coming in; it’s not gonna roll back out anytime soon. – Yeah. – Now I think, to the broader question then of how that should be controlled or regulated. You know, that’s really difficult to say, because I think, to your point, there are certainly countries out there that may use it solely from an efficiency standpoint. It’s just quicker to process people through immigration this way. – Right. – But I would say most countries will use it for a variety of different needs as things arise. Because once that data is there, it’s the rare organisation that can put on the brakes themselves and say, “Okay, we just wanna use data for this purpose and this purpose alone.” I think it’s unrealistic and probably a bit naive for us to think that data will not be used for other purposes. – Yeah. – Despite what anybody may say at the moment. – Yeah, and getting back to the regulatory standpoint, one of the things that we talked about before, if you recall, is who then is liable if privacy is breached, if the data gets out. One of the challenges with this, in terms of the ID standpoint, is that you could have a national government that, for example, has a hack or breaches data. It’s often very difficult, as the person who had your privacy invaded, to then have any type of redress, right. They’re typically sovereign entities, and so it’s difficult to go through. Okay, so for you, all the participants out there, what do you think?
Do you think that at some point, maybe in the next 10 or 20 years, there will be a kind of globally accepted, non-government-issued identification, or do you think that governments are gonna utilise this technology and double down, and use it to kinda control their own borders even more?
Additional Readings
5.6.1 Inclusion vs. Exclusion
– Okay, so far in this module we’ve talked about power and control, in particular control over banking, currency, and our identities. We’ve also outlined how scandals and problems have caused governments and traditional financial institutions to lose consumer trust, further pushing FinTech innovation. – And we know that FinTech innovators are pushing for the democratisation of finance, which largely centres on bringing financial products to the poor and disadvantaged. Through disaggregation, people are getting more information about financial products such as life insurance and mortgages, so that they’re making more informed and efficient decisions about the increased opportunities that they now have. – And through disintermediation and decentralisation, FinTech is both cutting out the middle man who normally controlled financial transactions and giving control to multiple players simultaneously, respectively. Finally, by de-biasing finance, FinTech is hopefully removing traditional biases like race or gender from the process and making finance easier to understand and access. – The aggregation of these five principles is commonly summarised by one word: inclusion. The drive towards greater financial inclusion is one of the key elements that makes FinTech so compelling and is making system-wide impact possible. For example, there are currently approximately 2 billion people in the world who are unbanked. Well, the World Bank has a goal to ensure they all have access to banking by 2020. And while that may still seem like a crazy, audacious goal, because of finance innovations it’s possible. – [David B.] According to the World Bank, quote, “Financial inclusion means that individuals and businesses have access to useful and affordable financial products and services that meet their needs, transactions, payments, savings, credit, and insurance, delivered in a responsible and sustainable way.” Close quote.
– Overall, FinTech has the potential to benefit underserved communities and individuals through a large array of features, like cross-border remittances, payment technologies using digital know-your-customer processes, alternative credit scoring, e-wallets, mobile money, microfinance, crowdfunding, and more. And the Global Findex database by the World Bank has shown that the share of adults holding a bank account rose from 51% in 2011 to 69% in 2017. That’s amazing and exciting when you consider that having access to banking is one of the key factors for raising people out of poverty. – And access to finance increasingly means access to the Internet and all things digital. This is perhaps most evident in China, where 890 million people are now using mobile phone payment apps and the transition to a cashless society has happened rapidly. These apps have a whole array of services that you can access with a few clicks on your phone. But the switch to a digital economy can also exacerbate exclusion. For example, in China it has been widely reported that the elderly are struggling to keep up with rapid technological advancements. As a result, what is supposed to make life easier for these citizens has in some ways done the opposite, particularly for the 118 million elderly who live alone. All right, so, what do you think? Is it better to have an efficient financial system where everything moves online, everything’s digital, and it makes things more efficient, more smooth, and cheaper, or are we concerned enough about, say, the elderly or the poor or those that are disadvantaged not having access to the system? So, for example, should a store be forced to receive and accept cash payments, or can we go completely cashless as a society?
Additional Readings
5.6.2 Should Vendors be Forced to Accept Cash?
Okay, so we’ve kind of explained that there’s this paradox, especially using China as an example, where there’s a push towards a cashless society, which in many ways is more efficient. But it also means it’s more difficult for certain people to access that cashless society, right? – Yeah. – So, in the case of, let’s say, the poor and the elderly, there are reports now coming out of China that some of them are actually having difficulty accessing the information technology. – Or this digital, kind of cashless system. So what do you think? Now conversely, just to set the table even more completely, you may not realise that there’s actually the opposite thing happening in other more developed parts of the world, the U.S. as an example. Very advanced cities like New York, Philadelphia, and I think Boston in Massachusetts actually have the exact opposite rules. They have laws that say a store, or a vendor, must accept cash, and the reason they say it must accept cash is that, if it doesn’t, it would make it difficult for the poor, let’s say, to access those commodities. In fact, those laws stem back all the way to the 1970s, when credit cards first came around, right? So you have this dichotomy where on the one hand it’s more efficient, maybe even safer, but on the other hand, it kind of excludes people. What do we do about this? – Well, me personally, I think from both a practical perspective as well as more of a normative perspective, it’s difficult to envision a situation where you go completely cashless in a lot of places. Partly because of the inclusion issues that you raised in your question. But from a practical perspective, I don’t think governments are incentivized to say, hey, we want to make it okay to not accept legal tender. – Mm. – To satisfy bills. – I think that’s a challenge when it comes to these kinds of things. 
So, fundamentally I think there’s a government incentive to encourage cash payments, or at least keep cash as an option. So I don’t think these are necessarily mutually exclusive. I think as a package they need to be considered together. – As an interesting side note, one of the examples that we provided in this is the idea of tourists. For example, if I was to go to China, as you and I have, or if we go to Myanmar or other locations, just as a tourist, whether you’re rich or not, you can have difficulty accessing the economy. I’m curious, in a cashless society, how would a tourist access that market? I don’t really know, is there a solution to that? – Well, I don’t know either. In countries that have financial systems that are fairly consistent or linked to each other, we get around that just by using credit cards. – Sure. – And so, anywhere you go, it’s pretty much okay. – And those are those stickers on the back that show UnionPay or whatever, saying they’re interoperable. – Right, you can use them globally, wherever their network reaches. I guess the problem is that, even though in many respects China is part of that network. – Yeah. – An example that we had last year was, even in a brand new mall, they wouldn’t take a credit card. – Yeah, yeah. – And so, if you start removing payment options, cash, even card, and go completely digital, using basically devices, then that becomes a problem, right? – Yeah. – And then you start cutting off people, perhaps. And from a holistic perspective, maybe you’re okay to say, hey, we don’t really care if people like you, who are only here for two or three days, are integrated into the system or not. That’s a judgement that certain financial ecosystems and countries can make. – Yeah. – And, you know, I don’t think we’re in a place to tell them to change. But I think it’s worth a question. And it’s worth thinking about, like we’re doing. 
– Yeah, so maybe for all of us, we’re actually asking the wrong question and looking at this from the wrong perspective, right? Because in reality, we’re asking this question from the perspective of people that currently use cash and have always used cash. But if the problem is access to the digital technology that allows you to make digital payments, maybe we should be asking: if it’s that important, should the government actually provide a digital device to every single person, right? So, just like the idea of treating it as a utility because it’s so critical, maybe there’s a government-issued smartphone, right? That everyone would have, and therefore everyone would have access to these economies. Who knows what’s gonna happen in the future. Clearly that’s not gonna happen now. But at some point, if it truly is a cashless society, you’d have to have the technology catch up to that, otherwise that cultural lag would mean that potentially a large percentage of your population would not have access to that cashless society. – And I think if, in some far-off future, we actually move to that, then we’d have to think about occasions where we would need contingencies. – Mm. – Like in a large natural disaster. – Sure, oh yeah, yeah, yeah. – I mean, then how would society function? – Which is true for us now, by the way, ’cause again, banks are essentially like IT firms. – That’s right. – If there’s a big power outage, or if you couldn’t access the bank’s money, a lot of which is not physical currency, the same thing can happen. – Which is why I think, in the face of those kinds of disasters, people will go to a bank and pull cash out prior to a large hurricane coming, go stock up on food. – Yeah. 
– And we’ll kind of do these things as a precautionary measure. And in an extreme sense, there are still people in the world who stockpile a little gold at home just in case they need to try to escape or something happens. You know, I’ve run into people who live in fairly developed places that still. – Yeah, gold coins. – Yeah, have little gold bars, gold coins, different things that they can use in a contingency situation. I think we’d have to account for that as a society before we move down that path. – Yeah, great. Additional Readings 5.7.1 Global Impact of FinTech
– Now, in this module, we’ve primarily considered the disruptive nature of FinTech at the individual and national level. But we wanted to end the module with a quick discussion about the implications at the global level. – So, let’s revisit the example of David Bishop visiting the street vendor in China to try to find his breakfast, this time from a more technical standpoint. The peer-to-peer payment system utilised by the vendor and the customers that day was significantly more advanced than what is typically utilised somewhere like Hong Kong, even though Hong Kong is generally considered a global financial centre. If you think about it from a FinTech standpoint, China and other developing nations have in many ways been able to leapfrog the legacy infrastructure seen in developed countries. – This is partly due to the fact that developing countries do not have as large a financial and cultural stake in traditional finance infrastructure, so they’ve been able to move past the developed world in some ways. For someone who has never used a peer-to-peer mobile finance application, for example, doing so may seem insecure and scary. But for someone who has only known banking on their smartphone, writing and sending a cheque may seem not only archaic and slow but even more insecure than using the peer-to-peer transfer system. – Many developing countries that are now working to improve their financial infrastructure will completely leapfrog much of the physical infrastructure that defines the finance industry in most of the developed world. For example, many customers in Africa and Asia may never use an ATM, nor will they ever set foot inside a physical brick-and-mortar bank. – [David B.] Now, as a personal example, over the 12 years that I’ve lived in Hong Kong, many businesses in the US have been reluctant to receive wire transfers from my overseas bank, preferring instead for me to send them a cheque. 
Now, I protest, explaining that doing so would be extremely slow, very expensive, and more susceptible to fraud or theft. But because this is their legacy payment system, they stick with what they know and trust. – As a result, some of the most advanced economies in the world are actually the slowest to adopt FinTech innovations, lagging behind many countries in Asia and Africa in regard to FinTech. – In the context of identity, for example, countries such as Kenya, Bangladesh, and Guinea, which lacked these legacy identity systems, are building digital identification systems, and many other government services are going from basically non-existent straight to digital payment or record systems. For example, last week I was in Myanmar and was incredibly impressed by a startup there called Koe Koe Tech. As many of you know, the country of Myanmar was only opened to the outside world in 2012. As a result, they have an enthusiastic but underdeveloped government sector that in many ways lags behind the rest of the world. But Koe Koe Tech and other companies like it are helping to digitise the Myanmar government’s systems. As a result, they may end up with a more advanced digital government payment and record-keeping system than advanced countries like the US, which are now burdened by ageing technical infrastructure. – [David L.] It’s important to remember that while much of the world is talking about the Fourth Industrial Revolution, some of the world is still struggling with the implementation of technologies that came about around the time of the First Industrial Revolution. Using Myanmar again as an example, seven years ago Myanmar had a nearly non-existent formal banking sector, but in just six years they’ve attained 80% smartphone penetration and have seen millions gain access to mobile financial services. This is quite remarkable if you consider that over 30% of Myanmar’s population lived below the national poverty line as of 2015. 
And hopefully, with further advancements in FinTech, poverty will continue to shrink and access to financial services will continue to climb. – Similarly, in 2016, 33 million Kenyans owned mobile phones, and 26.7 million of them had registered mobile transfer service accounts. It’s amazing. This was made possible because traditional banking systems were not accessible to low-wage earners and banks were not present in remote areas. So throughout Kenya they’re cutting out ATMs, reducing them by about 30% so far and leaving only around 2,000 ATMs in the entire country. This is a place where, according to UNICEF, access to basic quality services such as health care, education, clean water, and sanitation is often a luxury for some people. Their access to mobile banking, though, is really remarkable and a great sign of things to come. So, what do you think? Do you think that developing countries have an advantage of sorts from a financial infrastructure standpoint? Additional Readings 5.7.2 Will Developing Countries Leapfrog in FinTech?
So, the next question we maybe want to think about is: do developing countries, due to their lack of legacy infrastructure, have some sort of advantage in rolling out or adopting some of these new technologies that we’ve been discussing? So what do you think, Dave? – Yeah, so this I think is one of the most interesting questions, because here you have this paradox where the things that in the past would have held back a developing country, like physical infrastructure, bridges, roads, locomotives, whatever, may now, from a FinTech or financial infrastructure perspective, actually turn into some sort of an advantage, right? Now, I’m a big fan of the NBA, right? So I like basketball. And if you think of it from a team perspective, you often have ageing superstars where there’s a lot of money put into their contracts, and as they get older and become less effective, it ends up weighing down the team because there’s a cap on how much teams can spend, right? And so as a result, teams that have a group of young stars are said to actually have a better future, because they’re able to capitalise on those stars for a longer period of time, right? It’s kind of like that from a macro-economic perspective. Now again, this is not to say there are not challenges, right? This is not to say that, you know, Kenya or one of these places we use as an example would not certainly be willing to change places with, say, the US for certain aspects of this. But it is super interesting to think that the infrastructure they’re putting in now could theoretically help them develop and advance at a quicker rate than we’ve ever seen before. – And I think maybe it’s important to clarify too, I think generally everybody agrees that developing countries should broadly have good infrastructure, roads. – Yeah, of course, of course. – And those kinds of things. 
But I think specifically what we’re talking about is the type of infrastructure directly related to the adoption of technology. – Just making payments, yeah, adoption of technology. And of course there has to be governance behind it and so on. – And so I think one of the key intersections of this comes down to countries that have rolled out basically phone lines and data lines versus countries that haven’t, right? And that is part of infrastructure, but it’s very interesting to think about. You know, there are companies in the world who focus on, hey, how can we deliver similar types of service without having to put wire down or put up large towers where we’re stringing wire across, these kinds of things. And I think that’s interesting, right? And so a lot of developing countries in Southeast Asia and Africa are thinking about how they can deliver basic internet without having to put in all the infrastructure that a lot of quote unquote developed countries did 20 or 30 years ago, which we now rely on as the internet network that we have. – Or when they do put in the physical infrastructure, it’s at a much, much higher speed. So in a lot of developing countries, their internet speeds are actually quite a lot faster, because you’ll have an internet service provider in, let’s say, the U.S. that has spent a lot of money on that physical infrastructure, and they’re essentially like a monopolistic entity there. They don’t wanna go out and rip out those old cables and put in newer, faster ones. They just wanna milk it as long as they can. And so you have these slow internet speeds throughout the U.S., let’s say, versus a country that just got internet for the first time, where it’s immediately faster. 
– Yeah, and I think that really brings us to the crux of the question: is that somehow an advantage? Because a family somewhere that historically hasn’t had access to some of this technology no longer has to wait for a phone line to be put in for a landline or landline internet, all these things. Instead, everything is through their phone off a signal, with 4G, 5G, these kinds of advances in technology. And those signals allow them to have internet speeds as fast and as high-quality as if they were using a desktop computer at home. – Yeah. – Connected to a wired internet line. – Yeah. So, the example that I used with Koe Koe Tech when I was in Myanmar last week: I was super, super impressed with this startup because here you had a Burmese-American lawyer who went back, started a company, and is essentially trying to digitise their entire government services, right? So tax payments, utilities payments, right? Those are things that even here in Hong Kong we’re still paying physically by cheque, by paper, by fax, whatever. And it was so cool to see that he had this team of developers that were essentially digitising the entire process. Now, make no mistake, it’s still slow. It’s still inefficient; they’re just getting started. But the reality is, if those systems get in place, I mean, these are the types of legacy systems that are costing the U.S. government trillions of dollars, essentially, in non-digitized medical records, non-digitized tax records, et cetera. – Yep. Additional Readings Module 5 Conclusion
– Okay, so throughout this module five, we’ve used the same analogy of me going into China, trying to buy breakfast from a woman on the street, and showing how difficult it was to access that market. This is an example that we’ve shown over and over to talk about the decentralised, democratised, expansive nature of FinTech solutions, and how they’re changing the world, some at a rapid pace, some at a less rapid pace. We’ve talked about how, in developing economies, they’re sometimes leapfrogging technologies that exist in developed countries, and we also talked about how some of the legacy systems, both on the financial institution side, say big banks, et cetera, and now more recently on the TechFin side, are changing, and how from a government regulatory standpoint we perhaps have to rethink the way we classify or even regulate these big companies. – At the heart of a lot of these new technologies and these advances in FinTech that we’ve been discussing is the importance of using the concepts, the models, these five D’s and other principles that we’ve talked about in this module to assess new technologies: how they’re going to impact you individually as well as all of us collectively, to really understand whether this is going to lead us to a better place, or just to another concentration of power and influence in a different form. – So in the next module, we’re going to explore some of these topics even further and really dig into some specific examples of how people are utilising this technology for good, and maybe some of the negative, unintended consequences that occur when we don’t think about the application of these technologies. Module 5 Roundup
– Welcome back for our week five roundup. It’s amazing how fast the time has gone. We really can’t believe we have just one week left in the course, and we’ve been told that student participation in this course has been really high compared to some other online courses, so we genuinely appreciate that so many of you are taking this learning journey seriously, both at an individual level and in your willingness to participate in thinking about some really serious questions that impact all of our futures. – Now, as we near 6,000 students registered for the course, we’ve been humbled by your positive feedback. If you feel like this course has been a good experience for you, please do share it within your networks, as we think it’s important that as many people as possible think about and debate these issues. Just recently, the United States announced that it would commence antitrust investigations against some large technology companies such as Amazon, Google, and Facebook, partly related to data privacy and overall size concerns. So we are already observing the real-world implications of the concepts that we’ve covered in our course. – And as you know, in module five we discussed what we referred to as the decentralised future. Basically, we were trying to grapple with understanding how the future might look and how best we digital citizens can participate in crafting that future. Now, from the comments in the course forum, it seems like many of you have been thinking about similar questions, so we wanna jump into a few of your comments. – So, Dave, let’s start with our first question, which relates to one of the first principles of the module: do you think that FinTech will bring the world together or drive it further apart? Where do you stand on that? – Okay, so, regardless of where I stand, first I just wanted to point something out. 
I thought it was cool when reading through some of the comments that you as the course participants looked at this question very differently. So, some of you said that it’s gonna bring the world closer together because it makes it easier to, for example, purchase things online with digital payments, basically saying I can contact anyone and do almost anything from inside my own home. That was kind of the digital connection. Whereas others of you said that by virtue of that digital connection it is inherently driving the world further apart, because you’re not actually seeing people. You’re not going to the store, you’re not going out and writing letters to people and so on. So I thought that in and of itself was a very interesting dichotomy or paradox in the way you looked at it. And I think both are accurate, and neither one is necessarily good or bad, but both do come with positive and negative aspects. So, there were some specific comments. Ying2658 said that “FinTech brings distant people together but pushes further for people around us.” I think what Ying was saying is that it can connect people from around the world, but the people that are close to us are in some circumstances actually pushed further away. Or, as some have put it, we are actually looking at our digital friends to the exclusion of our actual in-person friends. Have you seen something like that? – I think the first part, about how we’re more connected to communities that are farther away, is what people originally thought were the benefits of globalisation. So look back at people like Thomas Friedman, who’s a New York Times columnist and wrote The World Is Flat, a book that was a best-seller, I think, maybe 15 to 20 years ago. 
This is basically what he was talking about: our relationships used to be heavily constrained by the people we could meet physically, and now that’s not the case, and we see that. And I think that aspect certainly is occurring. I do think, on the flip side, the relationships with people who are more physically proximate to us may not be as strong, and we’re definitely seeing aspects of that too. We see that when people think about, say, video game addiction and the lack of interaction that occurs in real life because people are on devices. Those kinds of activities, I guess, are what a lot of people are getting at. But even everyday occurrences. So in places like China, where there are super apps, people are getting food delivered, even coffees delivered, where normally you might have walked down to your neighbourhood coffee shop or your Starbucks or whatever and had some little interaction with the cashier or with somebody standing in line. Even those micro-interactions, I guess, are falling by the wayside now. And I don’t know if that implies any greater societal problem. Perhaps. But it’s certainly something to think about and consider. – Yeah, and it’s not FinTech-specific, but there are a lot of studies coming out showing, from a psychological standpoint, a mental health standpoint, even just a general physical health standpoint, that we are gonna have to adapt. We’re gonna have to recognise some of these challenges. On a very personal level, I’ve got three kids. You walk home, headphones are in, and everyone’s doing their own thing. It’s cool to a certain extent because I feel like. – You don’t have to talk to your kids. – Well, yeah, there we go. Bonus. No, but they’re connected with their family from all over the world, they live in a global community so they’re connected with people from all over the world. And they’re also learning things from all over the world. 
Like TED or edX and things that are really bringing people together from a knowledge standpoint. But it is interesting how quite often my 12-year-old daughter will text me from across the room. Mostly for fun, I think she’s kind of messing with me, but at the same time that is a very real occurrence. So we have to weigh the good with the bad and, from a cultural lag perspective, try to narrow that gap. So, there was one other aspect of this that came out. There was a comment from JudahFrankel, and it was a really interesting comment, I think. Really spot on. He took the scenario of me being in rural Western China trying to purchase breakfast and said that, yes, there were perhaps some impediments and challenges in terms of using the technology to purchase the breakfast, but he very insightfully also pointed out that had I just used cash, that also would have come with a whole list of challenges, including that the RMB is not freely convertible, so I would have needed a bank account or some process whereby I could not only pull out currency but actually convert it, et cetera. And so his comment was, well, look, is this really any harder? And in fact, once you set up that technology, it gets easier and easier. And I do think that that was accurate. Really insightful, even. And in my written comment to him, one of my points was simply that it’s not that one is necessarily better or worse, and indeed I think we could all agree that many aspects of the electronic transactions are significantly simpler, once you know how to do it, once you’re in the ecosystem. My point to him, though, was that just by virtue of the fact that it is different, by virtue of the fact that it is new, it’s inherently going to be distant for some people, and it will make things hard, again, for perhaps the poor or the elderly, minorities, migrants, and even tourists. 
You could have a wealthy tourist who’s just not part of that ecosystem and has difficulty accessing it. So, from that standpoint, any thoughts on that? – Yeah, well, I think the entry point is important. So, true, the conversion and all that stuff that you mentioned would have happened. But I’m sure if push came to shove and this lady really wanted to sell you whatever she was selling, and all you had were US dollars, and let’s assume she was accepting cash, if you gave her enough US dollars, she would probably give you– – Give her my wedding ring. – Yeah, she would probably give you the food. – Do some dishes. – Right. So I think the idea of the entry point, in terms of how we enter the ecosystem, is important. The barriers to entry for a cash economy are quite low. And it’s not even constrained to just, oh, you have to have that particular currency. We were just talking before we started filming about a situation where I had a visa issue going to Vietnam, and it was so interesting, ’cause when you showed up at this particular airport in Vietnam, they were like, “You have to get a picture for your visa.” But people didn’t have Vietnamese dong, so they were like, “Oh, you gotta give us $5.” And people were like, “Well, I don’t have any US dollars.” “Give it to us in whatever currency you have.” And that was okay, because it was cash. And so I think to your point, I think that’s right. The barrier to entry is important to think about, because there’s a higher barrier to entry now in China for someone who’s not in the system. – Yeah, I would argue that. – Yeah, right. Which is my opinion as well. Whereas in a pure cash economy, for example, or a hybrid where cash is still widely accepted, the barrier to entry is still quite low. 
– And the only thing I would add, and I agree with all of that, is that because the infrastructure has been in place for quite a long time, if you use the example of a tourist, when they get there, there are ATMs waiting for them and currency conversion kind of waiting for them. And when you think of the mobile device, yes, it’s easy to use and doesn’t take very long, but the reality is, as you said, the barrier to entry for a poor person accessing cash versus acquiring and using a mobile phone is actually quite significant. And we’re very far from the time when there will be mobile phones waiting for us at the airport like an ATM, because it’s not like one machine can serve everyone the way an ATM or a currency conversion kiosk can. So, again, great points. And I’m sure as time goes on, people are going to look at my scenario and actually see the cash as probably the more archaic, time-consuming option, thinking, oh my gosh, they actually had to go through that process? But for now, I think for a lot of us out there, there is a bit of a barrier to entry in terms of the technology, and we’ll see how that adapts and changes going forward. Okay, so, the next question or theme that we wanted to go through was this concept of cryptocurrency, particularly in terms of a potential decentralised future. Now, we asked: do you think that banks and potentially even governments will move more in the crypto direction, and if they do, will that lead to more entrenched traditional forms of power, meaning the banks and governments just use it and get stronger, or is this going to lead to a more decentralised future the way the founders and geniuses behind these cryptocurrencies originally envisioned? So, Dave, what do you think? – Well, that’s a pretty heavy question, and I don’t think we have the full answer. – Nobody does. 
– And if anything, it’s probably at least three or four different questions encompassed in one. But maybe we start with a recent news event and then start unpacking it from there. So, as probably most of our participants have read in a newspaper, magazine, or online blog, Facebook has recently shared its plans to launch its own stablecoin, which, my understanding is, will be tied to basically a basket of different currencies, and they’re taking basically $10 million investments from a variety of different types of stakeholders: financial companies, other technology companies, et cetera. – It is huge, by the way. – Yeah, which is huge. Because if they do roll it out across all the messaging platforms, you know, Facebook Messenger, WhatsApp, and Instagram, in theory they should have the largest payment network in the world. – And we should point out, this is happening because of this course. We pushed the, I’m just kidding. – So, yeah, it will be interesting to see how that plays out, and what kind of influence it has on how governments as well as individuals perceive these kinds of payment systems. – And how governments will perceive Facebook. – Yeah, exactly. So this is a core question, because as we mentioned at the outset of the roundup, they are undergoing some scrutiny related to antitrust. And this new stablecoin alone would not trigger antitrust violations by itself, but in that overall political environment, it’s another red flag that politicians and regulators will draw on to say, hey, how should we be treating you guys? Should we be treating you like a utility, which is something I’m sure we’re gonna discuss in a second, or should we treat you like a bank? 
I mean, if you have a network that allows a few hundred million people to trade payments– – Yeah, like leverage requirements or something. – And the reality is that you can actually hold that value in some way. So in the sense that you’re acting like a bank, should you be subject to similar scrutiny as a financial institution? This raises a lot of questions on the regulatory front. Large technology companies, at a high level, when they talk about this, don’t wanna be regulated like– – Of course not, yeah. They don’t wanna be regulated as any of those things. Antitrust, utilities, they don’t want any of those things. – But if the decision is made that, well, it’s gonna happen, then you start thinking about the side effects of that. Does it become a standalone business? Does it stay part of Facebook and get regulated? And even if they did get regulated, my question, when people talk about how it should be regulated like a utility or like a financial institution, is: that’s all fine, and let’s say that happens, but does that automatically mean it won’t be as big and influential as it is today? If anything, I feel like that could potentially give them a licence to be even bigger and more influential. – Yeah, because if you think about it, utilities are traditionally kind of monopolies or some type of cartel. You have this limitation on how many utilities you can have. An interesting side note is that utilities are usually very geographically specific. It’s not like there’s gonna be a world utility any time soon, so what would that even mean? If you did it from the US context, would every single country regulate them separately? Which is probably the likely future. – At least the foreseeable future. – Yeah, yeah. It’s gonna be super interesting, and I have no idea how they’re gonna deal with this. 
I do know that, as we’ve seen earlier in the course, I think we may have talked about it, Mark Zuckerberg and his entire team are trying to get out in front of this. He of course published an op-ed in The Washington Post about this, saying, “We know we’re probably gonna have to be regulated more. Here’s what we think that should look like.” You have the COO of Facebook reaching out and publicly stating that regulating it more or, sorry, breaking it up would be a nightmare for everybody. They’re really trying to get out there and lobby, not only lobbying the government but lobbying the public, saying, “This is why you should keep us together.” There was a great article recently, I think it was in The New York Times, that showed the behind-the-scenes lobbying by big tech firms, mostly from Silicon Valley, the amount of lobbying pressure they’re placing on Washington, DC, right now, massive amounts of money going in to drive the dialogue and push their story, their narrative. – And the interesting dynamic that’s occurring there is, one, I think they’ve been preparing for this for a while. – Oh yeah, oh yeah. – In terms of bringing over folks who have expertise in antitrust regulatory action and things like that. But the other thing that is really interesting is, at least as far as the US political spectrum goes, most of these technology companies that we’re talking about tend to be liberal, yet most of the folks who are bringing up this regulation are also liberal politicians. And so it’s a very interesting dynamic that is occurring and it will be interesting to see how it plays out. – And just to give you a specific example.
I think San Francisco is super interesting in this regard because you have one of the most liberal cities in the United States, with one of the highest per-capita incomes in the US, which is largely the home of, or at least nearby, many of these technology firms, and yet, what do you see? The State of California recently put in rules against the gig economy, saying that companies like Uber are gonna have to treat their workers like employees. You have facial recognition being banned throughout San Francisco. And so it’s this interesting dichotomy. In fact, on a more personal level, you see all these reports of Apple and other very senior technology executives who intentionally send their kids to schools that don’t use digital technology. So it’s just interesting, this kind of dichotomy. – And so, I guess more to the broader question, then. Will governments and central banks really ever give up this idea of central currency? I don’t think so. I mean, a large part of their fiscal policy, a large part of monetary policy, large parts of other types of government policy that deal with government revenue, all tie into this idea that they have control over the money supply, they can print money, they have control over interest rates. And, again, different countries have different systems, so we’re just talking broadly here. Would governments ever give that up? Not willingly. – Yeah, of course. – And if we– – Well. Yeah, yeah. They’d wanna control it, they would probably allow it, but it’s gonna be defined on their own terms for their own benefit. – So, if we use a non-technological example, what comes to my mind is the European Union, where as countries joined the Union, from an economic perspective they lost economic sovereignty, which has led to some problems for some of the countries in the Union right now. – Yeah, yeah, yeah. – And so I think if we apply that through a technological lens, then would there be some of the same issues? Surely, I think.
And so I think governments are hesitant to potentially walk down that road without knowing. I don’t think anybody wants to be first. I don’t think any major economy, at least, wants to be first. – And I would say, if there was one, I was speaking with a guy about this recently, I could definitely foresee the US pushing this, especially with large US financial institutions, in order to ensure first-mover status and ensure the US dollar remains the world’s fiat currency. Because if you did have a US-backed dollar crypto going forward, and especially if it were broadly adopted, that could potentially further extend the dollar’s role as the global currency. – So, I think the next iteration of this in my mind, or the next step towards merging traditional fiat with a wider-spread use of various forms of crypto, would be some sort of stablecoin. So what Facebook’s doing with Libra, stablecoins that maybe a particular institution puts out that are pegged to the US dollar or a basket of heavily traded currencies like yen, euros, whatever, and then you could see that being a partial next step in how countries and economies try to bridge the fiat and cryptocurrency divide. – Do you think the name that Facebook chose for the crypto is kind of interesting? – Yeah. – Like, Libra, I mean, I’m assuming. Is it L-I-B-R-E? – L-I-B-R-A. – R-A, okay. So I may be completely off here. I’m assuming it’s based on freedom and liberty? – That was my impression. – Yeah. ’Cause we’ve seen that. We’ve seen the Marshall Islands currency when specifically talking about sovereignty. So it’s interesting that even in the way that they’re naming these coins, a lot of them are focusing on this idea of freedom and democratisation of finance. Again, I may be off on the Facebook thing. I will find out. I’m sure someone’s gonna Google this and let us know.
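Since a basket-pegged stablecoin comes up several times in this discussion, a quick numerical sketch may help make the idea concrete. The Python snippet below shows how the reference value of one coin could be derived from a fixed basket of currency amounts and current exchange rates. The currency weights and spot rates here are invented purely for illustration; they are not Libra's actual composition or design.

```python
# Illustrative sketch only: valuing a basket-pegged stablecoin.
# Each coin "contains" fixed amounts of several currencies; its USD value
# is the sum of those amounts converted at current spot rates.

def basket_value_usd(weights: dict, usd_rates: dict) -> float:
    """USD value of one coin: each currency amount times its USD spot rate."""
    return sum(amount * usd_rates[ccy] for ccy, amount in weights.items())

# Hypothetical basket composition (amounts of each currency per coin).
weights = {"USD": 0.50, "EUR": 0.18, "JPY": 15.0, "GBP": 0.11}

# Hypothetical spot rates: USD per one unit of each currency.
usd_rates = {"USD": 1.0, "EUR": 1.10, "JPY": 0.0092, "GBP": 1.25}

print(round(basket_value_usd(weights, usd_rates), 4))  # → 0.9735
```

With these made-up numbers one coin would be worth roughly 0.97 US dollars, and its value would drift slightly as the basket currencies move against each other; in practice the issuer would also need reserves and a redemption mechanism to hold the peg, which is exactly where the banking-style regulatory questions above come in.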
One thing that personally I’d like to see, we keep talking about a utopian future, and on this particular topic I think one of the most interesting ideas, if we do go down the route of a public utility, is this: these really, really large TechFin platforms, by global comparison, really only serve a small number of stakeholders. There are fewer employees. They’re really creating massive amounts of wealth for a small number of people. So if you look at it from what’s known as the UBI, the universal basic income, standpoint, could you have these massive companies like Google, Facebook, Amazon, Tencent or whatever essentially either nationalised or run like utilities, where part of that wealth is actually pulled off and then distributed to normal people in countries around the world? There’s actually a presidential candidate in the US right now who’s running in large part on this concept of UBI, universal basic income. And he has said, of course he’s not gonna win but it’s an interesting idea, that part of that funding would come from essentially taxing these large organisations, which for the most part don’t pay any federal income tax. Now, as we wrap up this roundup, just a quick administrative announcement. For our verified learners who are pursuing a certificate, the final assignment is in the next module. In this assignment, we want you to utilise the framework and knowledge we have introduced in the course to analyse a FinTech product or solution of your choice. We hope this serves as another wonderful opportunity for you to learn from other excellent peers from all over the world. Accordingly, the essay is gonna be peer-assessed. Each of you will grade two other peers’ work and also receive two sets of grades and feedback. We can’t wait to read your great work and wish you the best as you progress to earning your certificate. – Now, thank you again for being part of this learning journey.
It’s been a great experience for both of us and we hope you enjoy the final module. We look forward to wrapping the course with you next week. Module 6 Positive Impact of FinTech 6.1.1 Can Technology Be Good or Bad?
In Module 5 we talked about some of the big-picture, foundational philosophies used by FinTech innovators to support and justify their work. For example, we talked about whether FinTech should aim to create a decentralized, more democratized finance industry, or whether FinTech innovations would further entrench traditional sources of financial power. In that module we focused on control and power – particularly who controls banking, currency, and our identity – with a specific focus on how to ensure FinTech innovations provide greater financial inclusion. We ended the module by considering the speed of these innovations, particularly in the developing world, noting that in some less developed countries FinTech has already advanced considerably beyond what is available in developed countries. So now we want to move on to our last module, and consider some of these innovations in real-life contexts. But to do that, first, we want to start with a question: can technology be good or bad, or is technology always neutral? Let’s look at this using a specific example. Take a moment and think about one specific form of technological advancement: nuclear power. What are your immediate feelings about nuclear power? Is it good, bad, or just neutral? On the one hand, nuclear power is among the cleanest, most efficient sources of energy ever discovered or utilized by mankind. Using material only about the size of a small cube, like a Rubik’s cube, you could generate enough electricity to power an entire city for decades, if not centuries. France derives about 75% of its electricity from nuclear power. As a result, France has cleaner and cheaper electricity than just about any country in the world. In fact, France even exports enough excess electricity to generate over 3 billion Euros per year. And yet in countries around the world, nuclear power is one of the least popular forms of electricity generation.
When I asked the question, some of you may have immediately thought of disasters like those that occurred in 2011 in Fukushima, Japan, or in 1986 in Chernobyl, Ukraine. Or maybe you didn’t even think about nuclear power in terms of electricity generation at all. Maybe you immediately thought about its even more destructive use: nuclear weapons. There is no denying the potential of nuclear power for both good and bad. But the questions remain: is nuclear power itself good or bad? Can technology ever be inherently good or evil? Or is technology always inherently neutral? One common debate that has occurred in countries like the United States around this question concerns guns. Typically the question is asked this way: do guns kill people, or do people kill people? And while this may seem like a silly debate to many of you, the underlying philosophical question has been at the heart of technological advancements for millennia. As an example of how long humans have been asking this question, let’s consider a term that you may have heard before: Luddite. Have you heard the term Luddite before? I remember the first time I called David a Luddite. The term basically means someone who is resistant to adopting new technology. The term Luddite is thrown around a lot in tech circles, but do you know the origins of that word? The Luddites were actually a secret oath-based society from 19th-century England. They were worried that their jobs were going to be taken by machines, particularly in the textile industry, and so some believe that the Luddites went around breaking machines to ensure they wouldn’t take over their jobs. For many Luddites, the machines themselves were the enemies. And damage to machines was so prevalent that in 1812 the British government even created a law that made machine breaking a capital crime. Ok, let’s bring this back to the FinTech world.
We have discussed the challenges and benefits of several FinTech innovations, such as the blockchain and facial recognition software. We have discussed multiple cases where blockchain-based technologies have been used for illegal behavior. But should that mean that blockchain itself is bad, and should therefore be made illegal? Or were the people who utilized the technology for illegal purposes the real perpetrators? Or concerning AI, should we be more concerned about the underlying technology that makes AI possible, or should we focus our attention on who has the ability to control the use of the technology? On that particular question, even Stephen Hawking, who famously warned about the dangers of AI, noted that the short-term concerns about AI depend more on who controls the technology than on the AI technology itself. So here is the question. To the extent that we have concerns about FinTech innovations, should we be more concerned about the technology itself, or those who control and use the technology? To the extent you have concerns about AI, are you more concerned about sentient AI itself, or the people or companies who will control the technology? Whether you believe that technology can be inherently good or bad, or whether it is inherently amoral and neutral, there is no denying that technology has initiated massive social changes over the past two centuries. The theory of technological determinism considers how technology has impacted human action, culture, and thought, and argues that technological advancements have been the primary source of social change. One element of technological determinism is an ideology called technological utopianism, which is the belief that science and technology will bring about a utopia where scarcity, suffering, and even death can be overcome. As we stated in the very first module of this course, we currently stand at the precipice of change. And we as humanity have some really important decisions to make.
Will we utilize the power inherent in FinTech innovations to build the type of utopian society that science fiction has imagined for us, or will we use these innovations to divide, control, and disempower? Although this course may seem somewhat pessimistic or cynical at times, the reality is that we are extremely optimistic about the future. And we truly believe in the power of FinTech to bring humanity together. So for the remainder of this Module, we are going to consider some of the ways people are creating FinTech solutions and building FinTech-based business models solely for the benefit of society. Do you believe that technology can usher in a utopian society? Should businesses have a social responsibility and generate some good for society, or should business only be responsible for generating profits for shareholders, or maximizing shareholder value, as it is commonly referred to? Should FinTech entrepreneurs consider the social implications that will come from the use of their technologies? Ultimately, whose responsibility is it to ensure that technology is being used in the best interest of society? And that is something we should all consider. Additional Readings Kang, S. (2017). Can Tech Ever Really Be Neutral? World Economic Forum. Retrieved from https://www.weforum.org/agenda/2017/09/can-tech-be-neutral-gnowbe/ de Castella, T. (2012). Are You a Luddite? BBC News. Retrieved from https://www.bbc.com/news/magazine-17770171 Higgins, A. (2018). Stephen Hawking’s Final Warning for Humanity: AI Is Coming for Us. Vox. Retrieved from https://www.vox.com/future-perfect/2018/10/16/17978596/stephen-hawking-ai-climate-change-robots-future-universe-earth Chandler, D. (2014). Technological or Media Determinism. Visual Memory. Retrieved from www.visual-memory.co.uk/daniel/Documents/tecdet/tdet08.html Hurme, P., & Jouhki, J. (2017). We Shape Our Tools, and Thereafter Our Tools Shape Us. Human Technology, 13(2), 145-148.
https://jyx.jyu.fi/bitstream/handle/123456789/56220/1/hurmejouhkihumantechnologyv1322017.pdf Fink, L. (2019). Larry Fink’s 2019 Letter To CEOs: Purpose & Profit. Retrieved from https://www.blackrock.com/corporate/investor-relations/larry-fink-ceo-letter 6.1.2 Can FinTech Usher in a Utopian Society?
So, that was a lot of questions we just asked you to consider, so let’s start with you, Dave. What do you think? Does technology have the capability or potential to usher in a new era of utopian society for us? – I think so. I think, empirically, you could look at the world right now, and certainly over the past 200 years, really over the past 50 or 60 years, there’s been more progress in terms of most empirical measurements of what a good society would be than we’ve ever seen before, right? So FinTech also has the ability to usher in some of these changes that can make society even better, bring us closer together, hopefully help with issues of scarcity, financial inclusion, but I do think that there’s a legitimate question about whether this can happen within the existing structure, with the businesses that we have today. So I guess, for those that are out there, one of the fundamental questions that people have been asking for decades now is, do businesses have a social responsibility, and should they be generating some good for society in some regard, or is their responsibility simply to generate profits for shareholders? And I think this is the underlying question that will determine whether we can use technology – Utopia. – to become a utopian society. So what do you think? – Yeah, so, I think a lot of people nowadays think, oh, this is like a new movement, where, like, oh, we have ESG, so environmental, social, and governance factors that kind of play into investors’ mindsets and things like this, and this is somehow new. And I think what’s important to point out is that actually the debate around what the purpose of a company, the purpose of business, should be has been going on for at least a century, if not longer. Should it be for shareholders or should you be focused on stakeholders? Should pursuing a profit solely be the focus or should there be broader considerations?
I think it’s actually, in my opinion, quite settled now. Outside of some rare situations, there’s no legal requirement to pursue solely profit maximisation, well, for most companies. Additionally, you have people, for example someone like Larry Fink, who is the CEO of BlackRock, the largest independent asset manager in the world, who by the very nature of the business that he does should be solely profit maximising, but actually, over the last few years, if you read the letters that he writes annually to CEOs, he’s talking about the broader purpose that businesses and companies have to society, beyond just profit maximisation. – And of course there’s Jamie Dimon, Warren Buffett. They’ve all signed on to these kinds of pledges saying that short-termism, the short-term maximisation of share price, is not a good model. – Yeah. I think in practice, we’ve moved beyond profit maximisation as the sole purpose. I mean, there’s a lot of academic discussion about it, but I think the reality is if you talk to business leaders, if you talk to people who have thought about the issue, they will clearly say, yeah, a company should pursue other things in addition to profitability. There should be a little bit more of a holistic perspective on how we approach this. So the real rub is in how you actually accomplish that. – And not to disagree, I think that is true academically, intellectually, but there are also very practical realities. I mean, actual lawsuits where shareholders did successfully sue company executives, directors, whatever, companies who utilised what the shareholders believed to be their funds, their profits, for some inherent social purpose, right?
And so, that is a reality. And so, even when you and I created this course, we talked to people in the tech space, and in asking the question of who should be responsible for thinking about the utilisation of these technologies, should technologists, should inventors actually be thinking about the positive and negative implications of their technology, I think the interesting, and hopefully surprising, response is that they’re not really thinking about that, right? – Yeah. – This is not part of most engineering courses or engineering programmes. Ethics courses typically aren’t required. I know Stanford has recently created a computer science ethics programme that I believe is now required. – So they’re trying to embed that into the curriculum. I think those are interesting points. On the lawsuits, there’s obviously a lot of litigation, usually involving some sort of shareholder dissent, and it’s about, like, we could have made more money if you had done this, but it’s usually not the pursuit of profit that was the issue in those cases. It’s usually that there’s some violation of a certain duty. And again, there are certain situations where companies, company directors in particular, are legally obligated to try to find the highest price possible, but that’s actually less common than most people think, to be honest. The interesting point, I think, is when people make new products, like new code, some new technology, the question about what the societal or ethical or public policy impact of this would be doesn’t usually come until after the fact. So the analogy that I think about is once you have a shiny new sports car in the garage, you think about, can I actually drive this? Would it be safe if I drove this? Those kinds of questions are actually really important. The reality is you’re just gonna drive it, because the temptation is just so high to say, hey, we just gotta get this out there. This is so great.
And if you think about it in that context– – Drones would be one of these technologies that, in this course, we talked about where people just started using them. – Yeah, and so if we bring that full circle, if a company has invested so much money, financial capital, human capital, time, into producing their equivalent of a shiny sports car, there’s gonna be a lot of pressure to roll it out, even though, internally, there may be some enlightened folks who think, hey, we need to think about this before we do it, right? And that’s where the real tension comes in, because everybody at that table will say, oh, this is an important thing to discuss, but let’s just go out and either make money or whatever rationale they want to use to roll out the product. And this is the real tension, because if you imagine hundreds or thousands of companies implementing these new technologies across the world, you could end up in a really not-so-great place, right? Additional Readings FinTech for Financial Inclusion: A Framework for Digital Financial Transformation. (2018). Alliance for Financial Inclusion. Retrieved from https://www.afi-global.org/publications/2844/FinTech-for-Financial-Inclusion-A-Framework-for-Digital-Financial-Transformation Galen, D., Brand, N., Boucherle, L., Davis, R., Do, N., El-Baz, B., Kumura, I., Wharton, K., & Lee, J. (2018). Blockchain for Social Impact: Moving Beyond the Hype. Retrieved from https://www.gsb.stanford.edu/sites/gsb/files/publication-pdf/study-blockchain-impact-moving-beyond-hype.pdf Bughin, J., Hazan, E., Allas, T., Hjartar, K., Manyika, J., Sjatil, P. E., & Shigina, I. (2019). Tech for Good: Smoothing Disruption, Improving Well-being. McKinsey Global Institute.
Retrieved from https://www.mckinsey.com/~/media/mckinsey/featured%20insights/future%20of%20organizations/tech%20for%20good%20using%20technology%20to%20smooth%20disruption%20and%20improve%20well%20being/tech-for-good-mgi-discussion-paper.ashx 6.1.3 Should FinTech Create “Good” for Society?
– So, a quick history lesson, maybe a legal lesson, for those who are out there, ’cause this underlying fundamental question is really important in the technology space. If you go back to the late ’70s, early ’80s, there was this new technology called Betamax, right? This is a famous US Supreme Court case. With Betamax, people all of a sudden had the ability to record television shows in their own home, right? So the question at the time was, the content creators that were fighting against this technology said the only valid use of this technology was the violation of copyright, right? You could only use it for a bad purpose, so therefore the Betamax technology itself should be illegal. Now conversely, those that created the Betamax, Sony, said, “No, there are a lot of different uses that you can have for this technology.” Just because there’s one possible bad use doesn’t mean the entire technology is illegal, and in fact, they made the distinction that it’s the person and the way they use that technology that determines whether or not it is ethical and or legal. So now fast forward to YouTube. YouTube has been sued many times over the content that’s on there, and so the question is, is it YouTube’s fault that people are violating copyright laws, or is it the people that are putting the content on YouTube that are violating these laws? Is it the technology or the user? These kinds of battles have waged on in Europe, in China, in the US. The legal systems there have analysed this question differently and they’ve come up with conclusions based on their own cultural and legal backgrounds. So I throw that question out. Now with that in mind, thinking about all these new technologies, we’ve talked about driverless vehicles, we’ve talked about facial recognition software, we’ve talked about all these cool new technologies, it leads us to this last and underlying question.
Who should be responsible for these technologies? Assuming technology in and of itself can’t be inherently bad, let’s assume it’s neutral, let’s assume it’s amoral. Whose responsibility is it to ensure that these technologies are being used in proper ways? Is it the creator and owner of that technology, like the creator of the Betamax? Or is it the people that are actually utilising that technology, like, say, the users of Silk Road? – Yeah, I mean, I think that’s obviously a super difficult question, and frankly we’re not gonna be able to parse that answer exactly here. – And just to cut in, one of the reasons why this is such an interesting question now is because unlike the Betamax, with which the only thing you could do was record in a single location, in your living room, let’s say, the difference in the technology now, I mean sentient AI or those types of things, is that once it is out there, once that shiny new car is on the road, you can’t pull it back, and it has the ability to scale theoretically exponentially. – Ultimately the short answer is, you know, stakeholders. If you think about what that encompasses, you know, that’s users, that’s the creators, it’s the companies that commercialise it, and it’s government and regulators. I think the pace of change that you alluded to is gonna be so rapid that we can’t rely on law and regulation alone. – Yeah, definitely not.
– And by default, because of that, a lot of the structures that we might want to put in place would almost just be placeholders, because as we continue through this iteration of new technologies and they build on each other, if we just say, “Well, it’s up to the governments, it’s up to having new laws, it’s up to creating a different regulatory structure,” those will only go a certain way. Ultimately, you know, I think there are other stakeholders that have to be involved, that have to have greater responsibility in that context, and you can create different incentive structures, I think, for people to start thinking about it. You can start from a base level of basic education, you know, as we were talking about just a few minutes ago, as people are developing new technologies, should companies develop a kind of internal culture of asking, “Okay, what is the impact of this?” Because, you know, there are a lot of fairly well-known technology companies, including Google, that have various mantras relating to helping human society. But in the end, if they don’t think about those core cultural questions at the beginning of developing new projects, then it just potentially mutates into something that they did not originally intend, but it becomes what it is. – Yeah, so again, going back to the very beginning of this course, remember different types of culture change at different speeds. This is what we call cultural lag, and technology in particular is changing incredibly quickly, while other aspects of society and our cultures, like, say, law, religion, etc., are not going to be changing nearly as quickly.
So what that means is, as David said, we’re not going to be able to rely solely on legislation, solely on the government, to regulate and police these things, and that’s why it’s so much more important for us to think about the moral implications of these technologies and to rely on other stakeholders, including the consumers, you and I, and the innovators that are actually bringing these changes to the marketplace. So for the remainder of this module we’re gonna focus primarily on these innovators, focus on these different business models, people that are choosing right at the outset to build business models that not only hopefully will have profit and sustainability, but will also do some type of social good, some social benefit, within the context of their actual service and product. Additional Readings Porter, E. (2013). Copyright Ruling Rings With Echo of Betamax. The New York Times. Retrieved from https://www.nytimes.com/2013/03/27/business/in-a-copyright-ruling-the-lingering-legacy-of-the-betamax.html Should the Tech Giants Be Liable for Content? (2018). The Economist. Retrieved from https://www.economist.com/leaders/2018/09/08/should-the-tech-giants-be-liable-for-content (paywall) Yurieff, K. (2019). YouTube Says It Will Crack Down on Recommending Conspiracy Videos. CNN. Retrieved from https://edition.cnn.com/2019/01/25/tech/youtube-conspiracy-video-recommendations/index.html Bergen, M., & Shaw, L. (2019). To Answer Critics, YouTube Tries a New Metric: Responsibility. Bloomberg. Retrieved from https://www.bloomberg.com/news/articles/2019-04-11/to-answer-critics-youtube-tries-a-new-metric-responsibility Eadicicco, L. (2019). Apple CEO Tim Cook Called Out Companies Like Facebook, Theranos, and YouTube in a Speech Pushing for Responsibility in Silicon Valley. Business Insider. Retrieved from https://www.businessinsider.com/tim-cook-stanford-commencement-speech-2019-6 Stanford University Commencement 2019. Stanford.
Retrieved from https://www.youtube.com/watch?v=OQ6bRYJAr4o&feature=youtu.be (video) Helberger, N., Pierson, J., & Poell, T. (2017). Governing Online Platforms: From Contested To Cooperative Responsibility. The Information Society , 34(1), 1-14. Retrieved from https://doi.org/10.1080/01972243.2017.1391913 Haselton, T. (2019). Google CEO: YouTube is too big to fix completely. CNBC . Retrieved from https://www.cnbc.com/2019/06/17/google-ceo-sundar-pichai-youtube-is-too-big-to-fix.html 6.1.4 Does “Legal” Mean “Ethical” and “Valuable”?
– One of the underlying questions that we like to explore in class is this idea: if something is legal, does that always make it right or ethical to do? So keeping that in mind, we wanna think about algorithmic trading, or electronic trading, and its utility. – I’ll come straight out and say I am not a finance guy, I’m certainly not a technologist, and so I don’t know a lot about this. But I was speaking to a friend of mine recently. He’s a programmer, but he’s in finance, which is very common here in Hong Kong, and he was talking about high-frequency trading and algorithmic trading, right? These types of hot, new FinTech tools that have been utilised around the world, but especially in a place like Hong Kong. And so as I was learning more about what he was doing, and specifically how the technology worked, I had a very, I guess I thought, simple question for him. And my question was, do you think what you do adds value to society? Right? And his response was immediate, and he said, “No. If you mean in terms of social value, no.” Right? To him it was just a job, and it was a way to make money. And there’s not necessarily anything inherently wrong with that. But I think this is an interesting question. If we use this as an example in the finance space, in the technology space, just because algorithmic or high-frequency trading is legal, does that mean that we should value it? And if so, does it mean that we should even allow it in society? – Yeah, that’s a really interesting question in a lot of ways. On one hand, what’s happened over perhaps the last 10 years or so, roughly the last decade, is a move, in particular for a lot of what’s considered vanilla trading, so things that are traded that are pretty standard, like just normal stocks and bonds, nothing fancy. A lot of that has been pushed into these areas of electronic trading, high-frequency trading, algorithmic trading.
And basically, depending on the strategy, it’s just being done by computers and machines, effectively. And we find, generally speaking, that that is more efficient. Right? And so on that hand you can think, okay, well, is that better? Well, in some respects that may be better for a client, right? You get better execution, it’s probably faster, it’s less biased. But in the overall grand scheme of things, like, so your buddy that’s creating this, is he adding societal value? You know, I can see where that jump is really difficult. I think if you take a really big step back and think of it from a macro perspective, there could be some societal value in someone creating such programmes. I mean, for example, most large companies, as well as countries in the world, have underfunded pensions. And basically they have obligations they need to pay out at a certain point that they won’t be able to, because there’s a shortfall. And as a result, what a lot of these places end up doing is giving money to others, basically investors, to manage for them, to hopefully make enough money where they can pay out money to you and me when we retire. One of the benefits, perhaps, would be that through more efficient trading, through electronic trading or algorithmic trading, maybe these people end up getting slightly better returns, which maybe ends up helping retirees get their full pension in 50 years. That’s really big picture though, right? And that’s kinda multiple steps removed from what your buddy is doing. And so in the context of that, it’s a really difficult thing to think about, right? On the flip side of that, this is not uncommon for a lot of people to think about in general either, right? So if we think about the person who works at a fast food restaurant, maybe serving unhealthy food. When you think about, like, is that adding societal value? – You gotta eat. – You don’t have to have an algorithmic trade. – Well, that’s right. But do you need to eat there, right?
– So this is like… – Think it through that way. It just becomes quite interesting. – This is at the heart of the question, and I think this is really what we’re asking for you to think about out there. Okay, so you may have differing degrees of restaurants, and there may be differing degrees in terms of the health of the food, but at the end of the day you have to eat. Right? So therefore at least you’re arguing within a specific, confined space of something that is a core need, and therefore we as society would value that enough to allow incorporation of that particular type of business enterprise. The question that I’m asking, and I think what we’re trying to get at the heart of, especially from the finance standpoint, is: does finance matter? And if not, if there’s a part of this industry, like algorithmic trading, that doesn’t have some inherent value to society, should that be something that we as society allow, and is that something that we should ascribe value to? So again, the point we made was that just because it’s legal, that doesn’t mean it’s ethical. Right? – Or even necessary. – Or even necessary. And so, I have this with my students all the time. They’ll say, but Mr. Bishop, it’s not illegal, right? So who cares? Mr. Bishop, it’s not illegal, so therefore it must be okay. – Which are not the same thing. – They’re not the same thing. And I think it’s a very dangerous kind of mindset that you do see quite prevalently in the finance space, one that does have me question whether or not we as society are valuing finance correctly and accurately, and whether or not, from a regulatory standpoint or from an academic, kinda education standpoint, we’re setting them up to view their role and responsibility to society accurately and effectively. Right? Additional Readings 6.1.5 How Can FinTech Create Value for Society?
– The point that I’ll leave with is, as we’ve discussed throughout this entire course, finance is critical, critical for the proper functioning of society as we know it. – Absolutely. And places where people are free, where they’re educated, where, you know, basically the places you’d wanna live, you’d wanna travel to and you’d wanna go, those are places that have a highly functioning financial sector– – System. – And so I think it’s imperative for us to realise that it’s just critical to do this right, to do it well, to see this not just be a money-making exercise, but to actually have some social purpose to what they’re doing, right? The history of banks, it’s super important, right. It’s about trust, it’s about safety, it’s about stability in society. And I’m just wondering whether or not we have gone so far with some of, like, algorithmic trading that it’s beyond– – Well, and, you know, to be frank, that’s only going to be the tip of the iceberg. – Right, exactly. – And as– – Meaning algorithmic trading as it is today is only the tip of the iceberg. – Yeah, I mean in terms of where technology interfaces with traditional financial services, the change is already starting to happen. – Yeah, yeah. – But, you know, algorithmic trading, high-frequency trading is, you know, something that seems sexy because, you know, it splashes onto, like, you know, different newspapers and magazines. It’s related to financial markets directly. – Yeah. – And so people tend to think of it as, you know, something very interesting. But the general idea of what’s happening, the structural change that those kinds of technologies are making to financial markets and how banks and financial institutions operate, that’s just gonna be, you know, an ongoing process that will actually continue and compound with speed over time as these new technologies permeate different functions that happen in banks.
And so the questions that you’re raising are absolutely correct. And on one hand, you know, it raises a whole host of different societal issues, right. So, right now the financial industry as a whole is a large employer of a lot of people globally. – Yeah. – How is the displacement of this labour gonna affect employment trends– – Yeah, we talk about truck drivers, – That’s right. – the reality is a lot of people in finance themselves are gonna be displaced, right. – And so, you know, increasingly, if you look at the distribution of employment at large investment banks, large financial institutions, the percentage of people in technology, – Yeah. – is much higher than it was, you know, 10, 15, 20 years ago. And so that kind of trend will continue, which is indicative of the fact that a lot of the roles people are in, increasingly, are not necessarily pure finance roles. – But more, kinda, this hybrid role– – Certainly that customer facing– – that your friend is in. – Yeah. – And so that’s one huge set of issues people have to think about. I think, you know, the broader foundational point, which is one of the core parts of our course, is this idea of broader purpose. That’s a really interesting discussion to have in the context of financial institutions. – Yeah. – Because by default, they should be profit maximising, right? – By just the very nature of what they’re doing. And yet, we wanna ask them to be more than that. – Yeah. – Which I think is fair. – Yeah. – Because of the great role that they have, of how they are basically the fuel, in many respects, of society. – Yeah.
– And of mobility, of social mobility, both individually for people like yourself, like you mentioned buying a home and being able to get a bank loan, or look at countries that we’ll look at, like Kenya or Myanmar, that rely on larger financial institutions, at the macro level, to be able to move up from a developing closer to a developed nation. The fuel for that engine is largely provided by aspects of finance. – So I like that, so if you think of law as kinda the infrastructure for society, the foundation, – Yeah. – finance is largely the fuel. – Yeah. – And so, okay, so again, the main concepts we’re talking about in this course, trust, accountability, right, the idea that there’s something of higher value, these are the things that we’re gonna be focusing on within the entrepreneurs and enterprises that are utilising FinTech, hopefully for some broader social purpose. It’s important to understand that, you know, there’s no way to know whether or not these companies we’re gonna use as examples are gonna be doing things the right way or the wrong way, if there even is such a thing. They’re definitely gonna stumble along the way, but the point is we want to look at ways that you can kinda build purpose into your business model, so that hopefully we can all kinda move towards a better, more inclusive society. – And to kinda cap all that, like our discussion about utopia in society, you know, we said there’s the structural part of it, and there’s the other stakeholders, and the government and laws, but ultimately this is gonna be about the people. – Yeah. – And so I think, to that point exactly, the founders, the people who are leading these, who are starting these organisations, hopefully, in addition to what they’re trying to do to make the business profitable, they also kinda listen to the call of trying to be more purpose-driven. – Yeah.
– And I think the combination of that will help us get to a place where, in an ideal world hopefully, or closer to an ideal world, – Yeah, yeah. – of this utopia we’ve been talking about. – Exactly. Additional Readings 6.2.1 Blockchain Application – BitSpark
Earlier in the course we talked about global remittances, which is the money sent home by migrant workers who leave their homes to find work. We already talked about how important remittances are to migrant workers here in Hong Kong, but equally so for the 150 million migrant workers around the world. And this is particularly important for the many migrant workers and their family members who are unbanked. These transactions are an important lifeline for families around the world. They provide money for essential expenses like food, education, health, and housing. And while the transaction amounts are often small, it is estimated that collective global remittances total over $600 billion annually. To put that into context, that's almost four times more than the total amount of international development aid, which the World Bank reported was at around $162 billion in 2017. The reality is that for many developing countries, remittances are a necessary crutch propping up their economies. But migrant workers are often unbanked or underbanked, and because of the small individual dollar value per remittance transaction, many workers have to use less trustworthy companies to help them transfer money. As a result, many are taken advantage of via excessive fees, and the transfer process is often very inefficient and time consuming. Seeing both a social need and an economic opportunity, one company that has sought to lower such fees and reduce inefficiencies is Bitspark, a startup established in 2014 and headquartered here in Hong Kong. The company claims to be the world’s first cash-in, cash-out blockchain remittance service. In basic terms, what that means is that they use cryptocurrency as a medium to transfer value directly between remittance shops. Like some of the other blockchain examples we talked about earlier in the course, Bitspark theoretically helps cut out the middleman, leading to faster transactions and lower fees for migrant workers.
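The "almost four times" comparison above is easy to verify with a quick back-of-the-envelope calculation. The two figures are the ones quoted in the text (the $600 billion remittance estimate and the World Bank's $162 billion aid figure for 2017); the script itself is just an illustrative sketch:

```python
# Back-of-the-envelope check of the remittance figures quoted above.
global_remittances_usd = 600e9  # estimated annual global remittances (course figure)
development_aid_usd = 162e9     # international development aid, World Bank, 2017

ratio = global_remittances_usd / development_aid_usd
print(f"Remittances are roughly {ratio:.1f}x international development aid")
```

This prints a ratio of roughly 3.7, which is where the "almost four times" claim comes from.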
The company now works with remittance shops all around the world, including Hong Kong, Malaysia, Ghana, the Philippines, Nigeria, Vietnam, and Pakistan, enabling users to transfer money quickly without the need for a bank account. And other companies are following suit. Additional Readings 6.2.2 Mobile Payment – Wave Money
Southeast Asia is one of the fastest growing regions for FinTech. It’s obviously super diverse in terms of culture and people, but also in levels of economic development. On one hand, you have one of the most developed, futuristic countries in the world in Singapore, but then a number of countries in the region where the populations are predominantly unbanked or underbanked. When I consider FinTech’s potential, one of the countries that comes to mind is Myanmar. I first travelled to Myanmar in 2010, about a year before it started its current economic liberalization. At that time, credit cards really didn’t work in the country, and overseas ATM cards were not compatible with domestic cash machines since Myanmar was isolated from the global financial system. It was, and in many ways still is, a very cash-dominant economy. At the time of my visit, visitors needed to bring with them all the cash they needed for their trip; otherwise it was going to be very difficult to access cash from overseas sources. In light of that situation, one of the FinTech ventures working to create greater access, financial inclusion, and economic efficiency in Myanmar is a company called Wave Money, a mobile financial services provider that was launched in 2016. To see Wave Money’s attractiveness, we just need to look at a simple case, which is featured on Wave Money’s website. Imagine you’re a worker that sends 20,000 kyat, approximately 13 USD, every month to your parents in a rural village. Prior to Wave Money, you would need to send your parents money on the morning bus, and your parents would receive it once the bus arrived. This is obviously a tricky proposition. You or your parents can’t open a bank account because not only do you not have enough for the minimum deposit, but, since you’ve never used a bank before, you are not sure how banks operate or whether you can trust them.
As an alternative, through Wave Money you can send a payment through your mobile device or a Wave Money agent, and your parents, even if they don’t have a mobile phone, can receive the cash from a Wave Money agent in their village without any delay. In this example, we see lots of potential issues related to trust, time efficiency, security, accountability, and a number of other themes that we have covered during the course. We can also easily see the tremendous market need for such a solution, for the segment of the market that needs it the most. Based on that need, Wave Money has grown rapidly since it started and is now serving more than 7 million customers through a network that is significantly larger than that of traditional banks in Myanmar. Wave Money also launched an enhanced mobile app, WavePay, to provide more functions. Reportedly, Wave Money’s payment volume in 2018 was approximately 1.3 billion USD, which is roughly equivalent to almost 2% of Myanmar’s GDP. Wave Money’s story, and others like it, are sources of our optimism about the great opportunities that FinTech can unlock for many of the world’s neediest. The potential for profound change is exciting for us. Additional Readings 6.2.3 AI Application – Anti-Human Trafficking
In module 4, you may recall that we discussed the potential issues and misuse of facial recognition technology. And, when it comes to facial recognition technology, it does seem that a majority of what is covered in the media focuses on the potential pitfalls or the potential negative consequences of this technology. We keep hearing fears and threats of a rising Big Brother surveillance society, and about privacy now being a thing of the past. What we hear less about, though, are the positive uses of this technology. The fact is that artificial intelligence and facial recognition have some really cool and promising use cases that are both saving lives and alleviating a lot of harm. One brief example that we discussed earlier deals with immigration. For those who travel frequently, you know how annoying it can be to wait in long immigration lines. But as discussed, many countries are using biometrics, including facial recognition, to increase the accuracy of information processes and simultaneously save us all a bunch of time. Perhaps an even more impressive example of how this technology can change the world revolves around human trafficking and modern slavery. Every day around the world, millions of people are trapped in modern-day slavery and sexual exploitation through human trafficking – and human trafficking has been characterized by the U.S. Department of Defense and others as the world’s fastest growing crime. Now, we often think that trafficking is just something that happens elsewhere: outside our city, outside our country – or something simply taken out of a movie. But, according to the UN, it happens all around the world. Now, these human traffickers often use hotels to place their victims – where they can take advantage of the privacy and anonymity accessible through the hospitality industry. Inside those hotel rooms, victims are often photographed, and the images are uploaded and advertised on both the open and the dark web.
Fortunately, some companies, like Marinus Analytics, have set out to address parts of this vast and severe issue. Marinus Analytics utilizes AI, facial recognition, and machine learning to find useful patterns among vast amounts of data and turn that into actionable intelligence for investigators. Their suite of tools, called Traffic Jam, helps law enforcement follow up on leads more quickly, find and identify victims more effectively, and track down trafficking groups. And their facial recognition technology (FaceSearch) can assess a photo of a victim and – within a few seconds – find out if this victim has been advertised online, and if so, find out where they may be. Another tool, called Hotels 50K, created what is essentially a ‘facial recognition system for hotel rooms’ by training a system to identify discernible markers among images of 50,000 hotels around the world. By doing so, the AI enables investigators to find out where an image of a victim was taken. So that’s not based on the face of the individual, but on the entire room itself. So what do you think? Do the positive uses of facial recognition technology outweigh any concerns you may have about privacy? Additional Readings 6.2.4 How Can FinTech Build Purpose Into Business?
– All right, so we’ve just provided a number of examples where companies are trying to infuse social impact and, hopefully, ethics into their FinTech business models. Right? So, what are some of your thoughts on the utilisation of these things, the positive side of FinTech, and what do you see as, I guess, advantages going forward for impact-driven businesses? – Yeah, so I think the first thing from my perspective is that frequently we like to bucket these things, like, certain types of business versus for-profit or traditional types of businesses, and I think we’ve talked about this before, but, you know, effectively these are just businesses. – Yeah. – Right? And we want them to be successful as a business, but of course, within their ethos, hopefully they’re infused with a sense of purpose, which I think we see in some of the examples that we’ve talked about, with Bitspark, or Wave, and some of these other ones. Right? There was an opportunity, like a market opportunity, but also there was a good purpose there that would be beneficial. And I think, you know, ultimately as we move forward, and we think about the complexity of the problems that we’re trying to solve, I think it’s quite clear that government alone is not the answer for the complex problems that we are facing as humanity. – Absolutely. – Right? So we, in a very important and profound way, need a partnership between private sector actors, government, and global citizens, collectively trying to work on these problems. And I think some of these real entrepreneurial new businesses that come out, with these great leaders thinking about how to do both, are one of the things that will help us get to where we want as a collective, as humanity.
– Yeah, and I think it’s especially true in places where perhaps the government sector is not as strong, where there isn’t really a social safety net. And as I often tell my students, in many parts of the developing world what we call social entrepreneurship is just survival. They’re not trying to solve a social problem. They just see a need and they are trying to fill that need. So I think that’s really the hallmark of most successful entrepreneurial ventures: the idea that you’re not trying to create a solution for a problem that doesn’t exist. You’re just trying to fill a void. And so in the case of, say, Wave, in the case of Bitspark, you have this massive population of migrant workers, of people in developing countries, and really, potentially the future even in the developed world, where they just need to move money. Right? Or they need access to banking. They need to be able to save money. They have to be able to transfer money back home to their families. They need to pay bills. – Yeah, and I think one of these new ventures that you’ve looked at in Africa, M-Pesa, is kind of a prime example of this, right? – Absolutely. Yeah, yeah. So M-Pesa, originally primarily in Kenya, but now really expanded across Africa and all over the world, has provided mobile banking and payment transfers, not even using smartphones, right? These are just kind of the old-school Nokia-style phones, and they’ve given people the opportunity to leave the countryside, maybe work in a more urban environment where there are more opportunities for jobs, but then still be able to provide for their family, their parents, their children back home by sending them money.
– Yeah, I think that will be one of the great trends of FinTech. So, you know, it’s not as glitzy necessarily, but this idea of providing things that maybe seem so elementary or basic to us is going to be a real value add for a lot of these communities: increasing labour utilisation, increasing time efficiency, and a whole host of other things that will hopefully increase just kind of the quality of life for many of these folks. – Yeah, but then on the flip side, not only is it basic or elementary to us, the reality is developed economies are learning from this, right? So as we’ve said earlier, some of these technologies are way ahead of what even you and I would have here in Hong Kong, which is, you know, the so-called financial centre, right? You and I don’t transfer money using our phones very often, right? And those tools are actually quite new here, whereas M-Pesa has already been used for many, many years. And I like that idea. So, I tend to come across, perhaps even in this course, as someone who is scared of some of these technologies, especially facial recognition, and kind of the Big Brother conspiracy theory stuff. But it is very exciting, as someone who is in the human trafficking, anti-slavery space, to think that you could use something like facial recognition to identify victims of trafficking, to identify people that are kind of forced into prostitution or labour, and then even notify authorities and have them kind of go in and help people. – Yeah Additional Readings 6.2.5 What Kind of FinTech Do We Want?
– So I think these new technologies will solve some really critical problems that we face, like that stuff you were mentioning about human trafficking and things like that. One other problem that I think these technologies will help us start addressing in a hopefully more efficient way is things related to health– – Yeah. – whether through devices, internet-of-things types of things, or through the consolidation, storage, and access of medical records and data, and how that interplays with the live medical information we can get from our bodies. So I think there’s a whole host of potential applications that can really be unlocked through some of these technologies, through different applications of them. – Yeah, so I’ve been doing work recently with a very large insurance company. And they have a number of different products across Asia, including health insurance. So a lot of people don’t really think about health insurance as a financial product, but it is. – Sure. – Any kind of insurance. – Yeah, I’m sorry, not just health insurance but really like life insurance, et cetera. Insurance is a really critical financial service that, in the past, has not been accessible to huge amounts of the world. And again, you’ve got two-thirds of the world’s population here in Asia, a lot of them developing, just coming up. And many of them don’t have access to these forms of health insurance, life insurance. – I guess in a comprehensive sense. – Exactly. And so what you’re seeing now is, in the InsurTech space, they’re actually investing in things like the internet of things, like wearables and stuff, where if you get a health insurance or life insurance policy, you wear the device. So again, we’ve shown examples of how being tracked like that can be a negative thing. But the reality is, they can identify and move towards a preventative medicine structure, a business model, where they actually know: Are you taking enough steps? Are you active enough? Are you getting enough sleep?
And these types of preventative measures can really bring down healthcare costs. So in the U.S.– – In the long run. – Yeah, I mean, the U.S. is paying over what, two and a half trillion, 2.6 trillion dollars, something like that, towards healthcare. It’s really just obscene. And so this can hopefully help to prevent these healthcare problems from occurring in the first place, which not only reduces costs for governments, for companies, for individuals, for families, but more importantly, can help provide people a healthier, happier life. – And I think that’s a great point. I think it’s important for at least two reasons. On one hand, unfortunately, there are a lot of countries that, as they develop, kind of take on this U.S. style– – Yes. – Western style of diet. – What he’s saying is they become fat. – Well, that’s not what I’m saying, but in terms of lack of activity, lack of regular exercise, and those kinds of things. So, as a preventive measure, this is important ’cause there will be increasingly more populations that are dealing with very similar types of health issues. And then, one thing we always talk about in this course– – Even mental health, by the way. – And mental health as well. One thing we talk about between ourselves, in the context of this course even, is this idea of finding balance too. Because some of this medical data, in some sense, is a double-edged sword. Because it’s good on the preventive side, but insurance companies are, like we said, financial companies. They are subject to certain pressures as well. So you can easily envision a situation where an insurance company would use that information and say, “Okay, this person is probably gonna be a risk, so let’s not insure them.” And then you start isolating people who probably need the health insurance the most but can’t access it. So it’ll be a very interesting, dynamic kind of landscape to see how this develops. – So again, great point.
And it really ties into everything that we’re talking about. So again, you have to have a balance between, in this example, say, health insurance, where we want to democratise it. We want to ensure that everyone has access to affordable health insurance. But then, that also would require a regulatory framework and a legal structure that allows for access for everyone, that doesn’t cut people off for pre-existing conditions or for gender or pregnancy and all these other real-life examples. So as we move forward as a society, as companies, as individuals, whatever, we have to think about the positive and negative ramifications of these things, because there are so many cool things happening, so many innovations. All we’re asking is that we just take a moment. We stop, we just ask. Okay, we’ve got the how. Let’s ask the why. And let’s try to think about why we’re doing these things and the potential negative ramifications of them. And as we do that, I think we’ll find that everyone, and really society as a whole, is better off. Additional Readings Course Conclusion: An Inclusive Approach for the Collective Future
So, let’s wrap up this module and tie all the major elements of the course together. Some experts out there argue that the technological innovations that are coming, and the pace of innovation, will unleash changes on humanity that we are not prepared for. Indeed, throughout this course we have explored a number of possible challenges and pitfalls that could come about because of certain FinTech advancements and business models. These challenges include bias, job displacement, a distancing of proximate and intimate human relationships, increased inequality, loss of privacy, cybersecurity threats, and more. Maybe after hearing us discuss all these different challenges and potential problems, you may now be of the opinion that our prospects are bleak, and that there is nothing that we can do to stop the eventual arrival of a dark, dystopian future. But the reality is that both David and I are extremely optimistic about the future, and our hope for FinTech and FinTech innovations has a lot to do with that. As mentioned earlier in the course, the reality is that the world today is empirically better in almost every measurable category than it was even just 30 or 40 years ago. And from where we stand, the future looks to be even better. But it is also true that the challenges we face today, whether in the FinTech space or otherwise, are not the kind of problems that can be solved by the decisions or actions of just a small group of people. The problems are entirely too complex: think of climate change, the future of work, and the potential challenges brought about by AI, blockchain, and other FinTech innovations. This requires an intelligent conversation among a broad swath of society. We are all impacted by these things, and therefore all need to educate ourselves, think deeply and broadly about the implications, and patiently and intelligently discuss what we want for our society. So we leave this course where we started: with the choice.
Do we want a utopian future where resource scarcity is a thing of the past, where basic finance tools are accessible to all, and inequality is reduced, where bias is removed from financial decisions, and accountability is transparent and fair? If we want these things, FinTech is likely going to be a huge piece of the puzzle necessary to get us there. So looking back, what are the underlying ethics concepts that we all need to keep in mind when considering an ethical FinTech landscape? The first is trust. As we covered in Module 1, trust is the bedrock principle upon which the financial industry has been built for millennia. People need to trust that their financial institution will protect them and their money, will always hold their customers’ needs above their own selfish interests, and is a reliable partner for all aspects of financial transactions. Unfortunately, some financial institutions have breached customer trust, which not only harms their own business, but also has broad implications for all of society. So FinTech innovators, including the large TechFins, need to ensure that the tools they are building are trustworthy and safe, and that their business models do not abuse customer relationships by selling data, maintaining lax security protocols, and other inappropriate practices that have unfortunately got some technology companies into trouble recently. As Klaus Schwab, founder of the World Economic Forum, said: “In a world where nothing is constant anymore, trust becomes one of the most valuable attributes. Trust can only be earned and maintained if decision makers are embedded within a community, taking decisions always in the common interest and not in pursuit of individual objectives.” The second underlying ethics concept that we discussed several times relates to this idea of proximity. Now, proximity is defined as a nearness in space, time, or relationship, and in this course we typically use it to denote how connected an action is to its outcome.
So, for example, in the Trolley Problem from earlier, the situation where we are asked to pull a lever to divert the trolley is less proximate than the situation where we are asked to directly push someone in front of the trolley. In the first example, the mere fact that we were asked to pull a lever puts some cognitive distance between the action of actually pulling the lever and the eventual outcome of someone getting hurt. But when we're asked to push someone directly, the action of pushing and the outcome of the person getting hurt are physically and emotionally connected. You may not realize it, but our brain reacts differently based on how proximate a situation is. The more proximate a situation is, the more we can emotionally connect to it, and therefore the more likely we are to choose to behave morally and ethically. In non-proximate situations, we are more likely to behave unethically because we do not feel a connection to the harm.

This has many implications in FinTech ethics. For a simple example, as we shared earlier, people tend to spend more when using credit cards rather than cash because credit cards seem more distant; when we hold cash in our hands, we tend to connect it with the hard work that went into actually earning that money. From a proximity standpoint, the reality is that FinTech is driving people further and further away from personal, emotional connections. When we purchase something online, we don't have an opportunity to look someone in the eye and speak to them. And while that may be more efficient and possibly cheaper, that distance is changing how we interact in society. For a really clear example of this in a non-FinTech space, just take a minute to read the comments on any news story or video. People on the Internet are mean! Much meaner than they are in real life. Now why?
Well, in part because commenters don't have to look the person in the eye and deal with the real emotional fallout of hurting their feelings. Is there a way that FinTech innovators can increase efficiency and reduce cost while also increasing our emotional connection to the broader community? Should this even be a goal? All we know is that a less proximate world would almost certainly lead to a relatively bleak outlook for humanity. So we hope that connectivity and a sense of community are not sacrificed as we seek enhanced efficiency.

The third underlying ethics concept that we discussed throughout the course is accountability. In a FinTech-dominated world, who will be accountable for decisions and actions that machines and algorithms make? It's pretty clear at this point that legal systems around the world are not well prepared to ascribe responsibility for things like injuries from autonomous vehicles, potential bias in decisions relating to financial products and mortgages, and breaches of privacy and other utilizations of FinTech. When you remove humans from the equation, sometimes you make a process fairer and less biased, or at least less reliant on our flawed decision-making processes and human subjectivity. But it also means that you no longer have a person to point to, or potentially blame, when something does go wrong. Are we ready for that type of future? Our hope is that a comprehensive approach is taken by innovators, regulators, corporate players, and even among us, the consumers, to ensure that all stakeholders' voices are heard and the finance industry continues to become more transparent and representative rather than more closed off.

The fourth underlying ethics concept was privacy. We've already talked about this a lot, and this is one of the ethics concepts that many of you hear about on a regular basis.
But that doesn't reduce the importance of this issue and the need for our society to continue to consider the place of privacy in our lives. Indeed, this will be one of the great debates of our generation. A study was done observing how ethical principles were valued among different generations, and the researchers discovered that in almost every single category, the current millennial generation valued the same ethical concepts as more mature generations. The one category that showed a significant difference was privacy. The reality is that young people today are growing up in a world without privacy, at least privacy in the sense of what it meant when we were younger. And this is having a significant impact on how people view or value privacy in their lives. And who knows? Maybe ultimately that's a good thing. Our point is merely that we have to consider the ultimate long-term ramifications of these changes while we still can. And financial institutions and technology innovators need to ensure that they maintain client trust by building business models that value customer privacy rather than simply profiting off of our personal information.

Finally, the fifth underlying concept was cultural lag. If you recall, cultural lag is the idea that some aspects of culture, particularly technology, change and adapt much faster than other aspects of culture, like religion and law, and many of the most significant social problems that exist are caused and perpetuated because of this gap. We gave several examples of this, including how smartphones were adopted almost overnight, but it took nearly a decade for people to realize the negative impacts that smartphones were having on their lives.
It's critical for FinTech innovators to really consider the implications of how their technologies will impact people, not only in the present but also into the future. For example, one of the biggest challenges that will be faced over the next century will be ensuring that the millions of people whose jobs are eliminated by FinTech innovations (the drivers, cashiers, and yes, even some finance professionals, lawyers, accountants, and even doctors) are able to be retrained and reintegrated into the workforce. This means that we also need to reconsider the way that we are educating ourselves and our children. As one commentator pointed out, our current education system is preparing our children for the future of work incorrectly. He stated that we are scared that human jobs are going to be replaced by robots, yet we are still teaching kids to think like machines.

We hope that you have enjoyed this journey together, and that we have given you something to think about and ponder, about the present as well as the future. We are really excited about the future and truly believe that FinTech has the potential for significant global impact. But for that to happen, together we need to forge a future that works for everyone, and we will do that by putting people first. Again, as Klaus Schwab said, FinTech tools are tools made by people, for people. Once again, we stand at a precipice, but we do so together. So together, let's take the steps necessary to take the utopian society that we see in the movies and make it a reality.

~End of Course~
~Thank you all!~

Additional Readings
Baron, E. (2019). We're Not Prepared For The Promise Of Artificial Intelligence, Experts Warn. The Mercury News. Retrieved from https://www.mercurynews.com/2019/03/18/were-not-prepared-for-the-promise-of-artificial-intelligence-experts-warn/
Schwab, K. (2016). The Fourth Industrial Revolution. New York: Crown Business.
Bishop, D. L., Lee, D., Ferrell, O. C., Fraedrich, J., & Ferrell, L. (2019). Business Ethics: Ethical Decision Making and Cases, An Asia Edition, 1st Edition. Cengage Learning Asia. https://www.cengageasia.com/TitleDetails/isbn/9789814780803

Course Roundup
– What a fascinating learning experience it's been for us, not just these past six weeks but really for most of the last 12 months. Most of you don't know this, but we've been preparing for this course in some way since last summer. So, as we finish the last module, it's truly bittersweet for us.
– Along the way we've been guided by the wonderful Technology-Enriched Learning Initiative (TELI) team here at the University of Hong Kong, captained by Andrea Qi. The TELI team has been instrumental in filming and editing the content we're working on, crafting wonderful animations, and just overall helping us appear much more professional and polished than we actually are in real life.
– Yeah. And we also really wanna express our gratitude to our super excellent research assistant John Pedersen from Norway, our Norwegian friend, who's helped in so many ways and contributed to this entire learning journey and this entire process. John's work has been invaluable and he's supported us in so many ways for the past year.
– Now, most importantly, we want to thank each of you for participating in this learning journey. We hope you continue the journey of exploring these important issues even after the formal end of the course. To contribute to that process we'll periodically upload or publish new content throughout the year, so look forward to engaging further then.
– Yeah, and now, with all that said, let's jump into some of the interesting questions and comments that you've had, as this final roundup for the last module, which explores the positive impact of FinTech. All right, so this week we had a lot of interesting comments about the good and the bad: the proper utilisation of technology and the purpose of the company. So, we talked about whether technology can be good or bad or whether it's always neutral.
We talked about whether, as a technologist or innovator, you should be thinking about the negative potential ramifications or utilisation of the technology. And we also talked about the overall purpose of a company: whether companies have a social purpose or whether it's just the traditional shareholder model, where the purpose is to make money–
– Make as much money.
– Yeah, make as much money as they can. And I thought it was interesting because recently Tim Cook, the CEO of Apple, actually gave the commencement address at Stanford's graduation. He said, quote, "Whether you like it or not, what you build and what you create define who you are. It feels a bit crazy that anyone should have to say this, but if you build a chaos factory, you can't dodge responsibility for the chaos." End quote. And so, Tim, he didn't say the names specifically, but he strongly implied Facebook, for example, with Cambridge Analytica, and he kind of mentioned without saying it Theranos and their blood diagnostics scandal. So, what do you think of this quote? I thought it was interesting 'cause it's coming from Apple, one of the most valuable companies in the world, a huge name, obviously right there in Cupertino, California, near the Stanford campus. What do you think?
– So, I think his approach, the way he described this philosophy that you basically have to be responsible for what you make, I think for a lot of entrepreneurs and innovators in Silicon Valley that is somewhat of a fresh approach. I'm certain there are people who think that. Certainly.
– And when they're young I think a lot of them want to believe that.
– That's right. And I've heard enough talks and met enough entrepreneurs to know that absolutely most of them have their heart in the right place, but how does that evolve? To me, I think about it in two ways. One is once you start taking external money there is pressure–
– VCs and stuff.
– Yeah.
There is pressure to basically have some sort of liquidity event, to monetize that investment even though it may be a few years down the road. So, that's one thing that creates pressure. But two, I think there's a general mindset amongst many entrepreneurs that, hey, my responsibility is just to create something cool. If there are negative ramifications of that, that's the responsibility of regulators and lawmakers and government, to tell us what's okay and what's not okay.
– And the user, yeah.
– And the user. But ultimately I think, and I've heard a few people talk about this, it does come down to a sort of innovator's perspective where they're like, well, you know, my responsibility is to create and the government's responsibility is to regulate, and we'll meet somewhere in the middle or at some point. And I think what Tim Cook is saying in the speech, which I thought was really interesting, is this idea that maybe that actually shouldn't be the mindset. Maybe there is some culpability, maybe not legal but certainly emotional and moral, about the thing that you're creating and the impact it has on the world. And I think that raises a lot of broader implications about other industries as well. So I'm curious to see whether people will still be talking about this, and how he's framed it, a year, two, three, five years from now, or whether it'll be something that sounded nice in a graduation speech but people tend to forget in history.
– Yeah, I really liked the way that he phrased it, "the chaos factory". Personally I think that was a direct stab at Facebook, because if you look at the last five years of Facebook, it has created chaos in a whole host of ways, and I think that was kind of interesting. Okay, so, let me play devil's advocate; that's what I get paid for, that's what I'm trained to do.
Many of you don't realise this, but my law firm represented Apple, and so I used to work directly with Apple's legal counsel in Asia. I think it's very fair to point out that Apple has had a number of very significant scandals and problems attached to it. And, again, I'm not saying that this negates what he was saying, but I do find it a little bit interesting that Apple, 10 years ago, was the technology company that everyone was throwing stones at, saying that the way it was driving its supply chain and putting pressure on its suppliers was in some instances causing people to commit suicide by jumping off roofs, and that was a very negative thing. People felt Apple wasn't on the humane side of things, that it was pushing its suppliers in ways that meant the makers of its hardware weren't treated very well. But I do think that Tim Cook has steered the ship in a different direction and is pushing for, hopefully, a more ethical company and a more ethical supply chain. At one point his speech directly relates to our course, because it almost started turning into what I thought was a commercial or an advertisement for Apple: he really started doubling down on privacy. We've already talked in this course about how Apple is actually using this privacy mantra as a business model to a certain extent. They're saying, "Hey, get on our platform, use our products, because we are actually selling, in part, privacy." If you're on Android, if you're on these other systems, then you're potentially gonna be exposed. And even in the case of the San Bernardino shootings, they kinda used that as an example to say, "We're not even gonna buckle to the pressure from the federal government in the US, the FBI, and we're gonna make sure that your data is private." I found that very interesting, and he talked about that a lot.
Again, what is your take on Apple's position as one of the most valuable companies in the world, with this really broad, almost cult-like following? What role can they play in steering other technology firms towards what is perceived to be a better future?
– Yeah, so, on the idea of privacy, to be honest, be it Apple, be it any other technology company out there, I don't think there's anywhere else they can go when it comes to privacy. I don't think you can argue the other side of the coin from that perspective, at this point.
– Give your data to everyone!
– Yeah, right. Again, if we were in a different technological ecosystem, like maybe in China, this would perhaps be a different discussion. But given the political reality of the United States, and given the core values of a lot of users of certain products in the United States, I don't think realistically a technology CEO can go the other way on privacy.
– Certainly not publicly.
– Not publicly. So, in some respects Tim Cook's speech was great, but it was also to be expected.
– That's fair.
– Now, the broader question, which I think is interesting, is this idea of: can Apple or a similar technology company guide that discussion broadly?
– Or even should they?
– Yeah, I believe so. I think that ties back into the earlier quote about whether you should be partly responsible for the things that you create. If we take that to its logical conclusion, then certainly they should also be responsible and active in shaping norms. In effect, that's what we're talking about, because I think at least both of us agree that law alone will never be enough, so it's users who have to create norms, or creators who have to limit themselves in certain ways when it comes to these kinds of things. Until that kind of mindset is more pervasive, we'll continue to have broader issues.
So, I think it's important in that sense for someone like Tim Cook to try to be a standard-bearer when it comes to these kinds of issues.
– Yeah, and I think it's interesting. Over several modules, when we talked about cultural lag and the negative ramifications and unintended consequences of the technology wave over the past 10 years, many of you pointed out, for example, smartphone addiction and how people were attending to their social network to the exclusion of their broader immediate network. And I think it's interesting, using Apple as an example, that they have somewhat publicly acknowledged their responsibility in that smartphone environment. They were the ones who first put out the smartphone in 2007, and certainly one of the most popular brands out there. And they're now cautioning people and even putting tools in place to help–
– Screen time.
– Yeah, Screen Time, to dial back the use of it. So, you could say that's a direct application of the moral values they profess being put into their business model and even onto their hardware platforms. Tim Cook also said, quote, "Technology doesn't change who we are, it magnifies it, the good and the bad." End quote. And I think this is another example of why we undertook this course in the first place. Again, we're not from the finance sector, we're not from the technology sector; we are business ethics teachers, we are lawyers at heart, and we just care about the future of society and the way these trends are going to shape our communities going forward. So this is really a concept that we wanna leave you with: technology isn't gonna make you good or bad, but it certainly can magnify the good and bad aspects of humanity.
And so social media can definitely bring us together, and FinTech can make life more efficient, but they can also create opportunities for exclusion. They create a distance, a lack of proximity, which means there's a psychological predisposition to be meaner on the Internet. There's cyber-bullying. There are so many aspects of society now driving people towards mental health issues, depression, distance, loneliness, and sadness, which I think can definitely be alleviated or magnified by these technologies. So, as we move forward, our call to arms for all of you out there is to keep this conversation going and really think about how these things not only can impact society but how you choose to use them to impact society.
– Thank you again for participating with us during the course. It's been a great experience for both of us. Now, although we've come to the formal end of the course, this is definitely not the end of our learning journey. The course will roll over to what's called a self-paced format starting from June 26th and will continue that way until May 14th of next year, 2020. We'll continue to share updates on recent FinTech innovations, as well as laws and regulations and other things we feel might be relevant, as part of our overall course learning journey.
– Yeah, so please stay in touch with us through email, the newsletter, and edX, on our social media channels, as well as our personal LinkedIn accounts. We really look forward to maintaining and expanding this learning community and continuing the discussion of these important questions. We wish you the very best in all that you're doing and aspire to do.
~Thank you~