About this course
FinTech has started a global revolution in the financial services industry, and the transformation will only increase in coming years. There are many ways in which FinTech can improve the lives of people around the world; however, those same technologies can also be used to enslave, coerce, track, and control people. Accordingly, it is appropriate and necessary to consider the implications of the introduction of these technologies so that they are utilized properly, regulated sufficiently, and their adoption does not come at the expense of societal growth.
This six-week online course covers six modules, representing the full spectrum of finance, technology, and the introduction of FinTech solutions globally. We will ask questions that are not often asked or addressed when new technologies are adopted. Why should we adopt FinTech solutions, and what are the best ways to introduce disruptive technologies? How does blockchain technology change the way we provide financial services, and how should blockchain technology be governed? Is FinTech creating risks in cybersecurity, and how can technology help us prevent financial crimes? As Artificial Intelligence (AI) is developed and adopted, will human biases and prejudices be built into such mechanisms? And at a larger scope, should FinTech lead to a decentralized, democratized system of finance, or will existing institutions adopt FinTech strategies to cement their existing hold on the financial markets?
Through discussing and attempting to answer these questions, you will better understand how the introduction of these technologies can benefit or harm society. And by considering the proper application and introduction of such technologies, you will learn to make better decisions, as an individual and as an organization, when facing the question: is FinTech our savior or our villain?
What you’ll learn
- Understand the ethical elements of finance, emerging technologies, and FinTech.
- Identify trends and opportunities that will shape the future of FinTech.
- Critically examine the implications of Artificial Intelligence (AI), blockchain, and cryptocurrencies (including ICOs).
- Understand how Regulatory Technology (RegTech) enhances supervision and reduces compliance-related costs.
- Understand how payment solutions are evolving and their potential ethical implications.
- Understand how alternative financing, including crowdfunding and P2P lending, is impacting markets.
- Analyze positive and negative aspects of the introduction and expansion of FinTech.
Syllabus
Introduction: Ethics of Finance and Emerging Technologies
This module will provide a historical and broad perspective of ethical issues relating to finance and the introduction or adoption of emerging technologies.
Blockchain and Its Governance
This module will build on the Introduction to FinTech course (https://www.edx.org/course/introduction-to-fintech), considering the most relevant and ethical ways such technology should be implemented across a number of different industries and product segments. In particular, data collection, customer privacy, and transactional issues will be covered in this module.
Cybersecurity & Crimes
FinTech can make it easier and cheaper for banks to monitor and control financial transactions, thus reducing fraud and bank costs. But at the same time, these tools can be used to steal money and corporate secrets, hide illegality (including purchases of weapons, drugs, etc.), and finance terrorists and other criminal organizations. Accordingly, this module will consider the implications of these important issues.
AI & FinTech
In this module we will consider the implications of building our own concepts of “human” morality into amoral machines, and whether human biases and prejudices can or will be built into such mechanisms, whether purposefully or unintentionally.
Institutionalization vs. Decentralization
One of the key attractions of FinTech is its decentralized nature, which promises to democratize finance and allow regular people to participate more fully and affordably in financial transactions through technologies like cryptocurrencies, non-government-issued IDs, and P2P lending. In this module we will address some large questions, considering whether FinTech should lead to a decentralized, democratized system of finance, or whether existing institutions will adopt FinTech strategies to cement their existing hold on the financial markets.
Big Questions Relating to the Introduction of FinTech
In this final module, we will consider some of the many outstanding questions about the purposes of introducing FinTech to the world, exploring the many ways that FinTech can both help and hurt society. We will discuss financial inclusion, sustainable development, and many other positive aspects of FinTech development. Conversely, we will also consider how these same technologies and solutions could potentially be used to inhibit access to financial markets, or worse.
Welcome and Course Administration
Welcome to FinTech Ethics and Risks
Hey, listen. We have a decision to make. Humanity stands on the edge of a massive shift in technology and productivity that is going to fundamentally alter our lives. Blockchain, big data, artificial intelligence – these buzzword technologies are rapidly changing our world, just like the steam engine that started the Industrial Revolution. Over the past century, new technologies have changed how we work and even how we define work. During that time, the average number of hours worked has steadily declined in the developed world, but lifestyles have generally improved. And the technologies on the horizon look set to completely alter society as we know it. So here’s the cool part. If we get the next 10 years right, humankind could be well on its way to reaching the type of utopian existence characterized in many stories about the future. This is especially true in the area of financial technology. And while maybe not as sexy or cool as driverless cars, advancements in FinTech will make it easier to send, receive, and invest money. These activities are at the core of business and commerce, and FinTech stands to alter these interactions completely. But before we dive headfirst into this brave new world, it’s critical to ask a few questions. Like, why? Why is blockchain technology necessary? Is faster, cheaper, smarter always better when it comes to data? What unintended consequences will arise from introducing artificial intelligence into everyday life? You see, unlike the steam engine, the magic of these new technologies is that they can scale faster than ever and quickly engulf the entire world. And while they may have the power to unite and transform, they can also be used to bind and control. Advancements in technology over the next decade will certainly lead to massive job loss, and many fear new forms of slavery, surveillance, and crime. Now is the time for us to consider what we want and what we will allow. We can’t wait until these new technologies are fully developed. Once we push play, we can’t just rewind. If we don’t talk about it now, it will be too late. This course is a chance for us to consider these questions together. Through it we hope to explore the implications for us individually and collectively. We live in a world where distance is relative and resources are growing scarcer, where local problems now have global implications. Humanity may stand on the edge, but we stand on it together. So join us as we consider these tough questions and help shape our collective future.
Course Outline
FinTech Ethics and Risks is a six-week, six-module course. Each weekly module comprises 5-7 sections, consisting of 15-20 learning units. In each learning unit, there is a short lecture video, followed by learning activities such as Quick Check questions, Polls, Word Clouds, and Discussions. In addition, a range of additional resources is provided, including research papers, news articles, industry reports, and useful links. There is a Conclusion Quiz at the end of each module.
Discussion is a very important part of your learning experience in this course. The course instructors will post questions and discussion prompts under each topic, and selectively comment on your responses. By the end of the course, active participants will also be invited to be discussion moderators and community TAs for the next course cohort.
Course Outline:
Module 1: The Ethics of Finance
This module will provide a historical and broad perspective of ethical issues relating to finance and the introduction or adoption of emerging technologies.
Module 2: Blockchain and Its Governance
This module will build on the Introduction to FinTech course, considering the most relevant and ethical ways such technology should be implemented across a number of different industries and product segments. In particular, data collection, customer privacy, and transactional issues will be covered in this module.
Module 3: Cybersecurity and Crimes
FinTech can make it easier and cheaper for banks to monitor and control financial transactions, thus reducing fraud and bank costs. But at the same time, these tools can be used to steal money and corporate secrets, hide illegality (including purchases of weapons, drugs, etc.), and finance terrorists and other criminal organizations. Accordingly, this module will consider the implications of these important issues.
Module 4: Artificial Intelligence & FinTech
In this module we will consider the implications of building our own concepts of “human” morality into amoral machines, and whether human biases and prejudices can or will be built into such mechanisms, whether purposefully or unintentionally.
Module 5: A Decentralized Future
One of the key attractions of FinTech is its decentralized nature, which promises to democratize finance and allow regular people to participate more fully and affordably in financial transactions through technologies like cryptocurrencies, non-government-issued IDs, and P2P lending. In this module we will address some large questions, considering whether FinTech should lead to a decentralized, democratized system of finance, or whether existing institutions will adopt FinTech strategies to cement their existing hold on the financial markets.
Module 6: Positive Impact of FinTech
In this final module, we will consider some of the many outstanding questions about the purposes of introducing FinTech to the world, exploring the many ways that FinTech can both help and hurt society. We will discuss financial inclusion, sustainable development, and many other positive aspects of FinTech development. Conversely, we will also consider how these same technologies and solutions could potentially be used to inhibit access to financial markets, or worse.
Module 1: The Ethics of Finance
1.0 Course Introduction
Welcome to FinTech Ethics and Risks. We are excited to embark on this learning journey with you, and we genuinely believe that the principles we will explore together are at the heart of one of the great debates that humanity will need to address in our lifetimes. Over the last few months, as we have prepared this course, this reality has become even clearer to us. Advances in technology, especially those related to financial technologies or “FinTech”, are already starting to impact us and will eventually become so pervasive that they will be a core part of our existence. Because of that, we felt compelled to teach this course, in order to collectively consider, with you, key principles and questions about how we want to manage technological change as it intersects with our lives. In developing new technologies, the key focus is usually whether something can be developed or created – in essence, the question, “Can we do it?” This important question has been an engine that has driven human progress and technological advancement. There is, however, another equally critical question that is usually not asked: “Should we do it?” This question is incredibly important because it forces us to consider the impact of new technologies at their genesis, and not when it’s too late or too difficult to mitigate negative aspects of the technology that were not initially considered. So at its core, this course is about considering the impact of new technologies, especially FinTech, before they are so mature and embedded that they cannot be managed. To kick off our journey, we will first consider the history of finance and its role in society before moving on to an interesting case study about the financial institution Wells Fargo. Then we will lay out five key principles that frame the course: trust, proximity, accountability, cultural lag, and privacy. We will return to each of these principles repeatedly through the rest of the modules. Lastly, while the nature of ethics sometimes requires an exploration of the dark realities of life, please don’t mistake that for a lack of enthusiasm about the future. If thoughtfully managed, we believe FinTech is a key to a utopian future where society is more fair, just, and inclusive. Thank you again for joining this journey. It’s important, because we have a collective choice to make about that future.
1.1.1 What Is Money?
Before we begin exploring fintech in greater detail, let’s take a few minutes to consider the history of finance, and what role it plays in society. To do this we will consider three questions: One, what is money? Two, how do we value money? And three, why do we have banks? Answering these questions will help us understand the rise of fintech, and the moral underpinnings that make up the foundation of the industry. And for those of you in the finance space, please bear with us for a moment. This course is being taken by diverse people from all over the world, and is going to cover some pretty complicated ideas. We need everyone to have a clear foundation on some of the major principles so that we can move into some of the more advanced concepts. Now in reality, society is at a stage where we all should take a step back and review the nature of the finance industry. So whether you’re new to finance, or a savvy industry veteran, let’s revisit these foundational principles together. A few weeks ago while I was walking my 7-year-old daughter Lola to school, she asked me a question that caught me completely off guard. She looked up at me as we were walking hand-in-hand and said: “Daddy, what is money?” I was confused by the question, and started mumbling something about bartering, and working, and that we use money to represent value. But no matter what I said, she just kept repeating “that doesn’t make any sense, that doesn’t make any sense. Money is just paper and is not worth anything.” Well, what I failed to explain to her 7-year-old mind is a key lesson of finance upon which much of society is built: that the value of money is a social construct built on trust. Let’s look at this another way: close your eyes and imagine you were just given a million dollars. And really, close your eyes – just trust me for a minute. Now, picture it. Really try to think about what you could buy for a million dollars. But now wait a minute – I didn’t tell you the currency of the million dollars. Think about how different that consideration would be if the currency was US dollars versus Hong Kong dollars, or some other type of dollar. As you probably know, the value of money fluctuates based on the relative value of the currency. And this calculation also changes depending on the time: so a currency may be more or less valuable today than it was yesterday. This has been starkly evident when observing the massive fluctuations of cryptocurrencies like Bitcoin over the past few years. Okay, one last consideration: close your eyes again and envision what you can buy with US$1 million. Try to picture it. You could buy a nice home, a fancy sports car, or finance a trip around the world several times over. Now, picture that pile of cash and what it would look like. Maybe even consider throwing it out over a bed and just rolling around in it for a while. Now imagine you are stuck on a deserted island. You are starving, thirsty, maybe scared. You have that same pile of money – but what can it buy you now? Are you going to be able to negotiate with the apes for some of their bananas with that money? In that context, the dollars might be more valuable as kindling to help you start a fire! Or imagine if a small boat pulled up to the island with the ability to rescue you, but the price of your rescue was the entire one million dollars. Would you pay it? Okay, so what’s the point? We share these stories because before we get too far in this course, we need you to understand a couple of things.
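(Before we get there, a quick aside to make the currency comparison above concrete – the figures here are rough, illustrative assumptions rather than part of the lecture: the Hong Kong dollar has long been pegged at roughly HK$7.8 to US$1, so HK$1,000,000 ÷ 7.8 ≈ US$128,000. The same “million dollars” buys roughly an eighth as much, depending entirely on which currency it is denominated in.)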
The first thing is that value is subjective. And when we say value, we are referring to both material value – the value that we ascribe to goods and services – and the value that we place on morality and personal connections. The second thing that we need you to understand is that the very concept of money is largely a social construct, something we have invented as a medium of exchange and to which we have ascribed a specific value. As my daughter Lola noticed at 7 years old, in a vacuum money by itself isn’t really worth anything. And when stuck alone on an island, your banknotes carry little value. So if currency by itself is essentially without value, then why is it so important and coveted so highly? To understand that, we have to go back a few centuries.
1.1.2 How Do We Value Money?
So if currency by itself is essentially without value, then why is it so important and coveted so highly? Let’s do another thought experiment to explore the answer. This time imagine you live in a small European town maybe 250 years ago. You can select your occupation – maybe a blacksmith or cobbler, or, more commonly at the time, a farmer. Back then, most businesses were small family enterprises, everyone knew each other intimately, and a lot of transactions were based on a barter system. So if you raised chickens, you could trade your eggs for whatever else you needed. For example, if you wanted rice, you would need to find a rice farmer who had spare rice to sell – and who wanted eggs – and then agree on how much rice an egg would get you. Back then, most transactions were proximate, meaning they were directly between two people meeting in person. In that type of proximate, one-on-one scenario where you both live in the same small town, deceptive sales practices are far less likely because merchants relied on their good name. As you can imagine, it would be pretty awkward if you cheated someone since you would have to continue running into each other in your small town. As a result, this type of personal connection to the community significantly increased the trust within the marketplace. So as you probably can understand, this type of proximate, one-on-one barter system is fairly limited. If you were a teacher, for example, how many eggs is an hour of education worth? So over time money was introduced as a common medium of exchange, making trade easier and giving rise to service industries and other knowledge-based professions. But here’s the challenge: let’s say you start to receive currency for your eggs rather than bartering for rice or other goods. How can you ensure the value of that currency? Imagine if for your entire life you’ve been bartering for goods, and now instead someone wants to hand you a piece of paper and say that it is the equivalent value of that particular good. As we discussed earlier, in a vacuum the currency in your wallet has little inherent value. Its value exists simply because we have decided that it does – and society has decided to use it as a medium of exchange. As my daughter Lola said, that really doesn’t make any sense! Well actually it does, but only when based on a very broad social construct founded on trust. We trust that the money we hold in our hands today will have some meaningful value tomorrow. Even in today’s much more complicated marketplace, trust remains the basis of our monetary system. Our money today is not backed by a physical commodity like gold. It only has value because governments have declared it to be legal tender – what we often call fiat money – and people believe or trust that such status will continue. If you remove trust from the financial system, then the entire thing crumbles, as we have seen happen during financial crises around the world like the one that is sadly occurring in Venezuela right now.
Additional Readings
1.1.3 Why Do We Have Banks?
Okay, so we’ve done some interesting thought experiments related to money and how we value money, and particularly about how our financial system is based on trust. But then what is the “financial system”? What does that even really mean? Finance and the financial system largely refer to the services related to the management of money. And, as mentioned earlier, this often requires a relationship of deep trust, or what we would call today a “fiduciary relationship.” And if our trust in money is largely a social construct, our trust in the financial industry is largely an economic and legal construct. In other words, we rely on contracts and the law to enforce our rights, rather than intimate social relationships as was common back in Europe 250 years ago. Although finance is made up of many types of institutions, banks have always been at the heart of the industry. So let’s take a minute and explore the traditional purpose of banks. Why do we even have them? When I think about a bank, the first thing that comes to mind is a physical location where people go to deposit money and withdraw money. Or to put it really simply, it’s the place you stick your money to keep it safe. But this is only a part of what banks are for. During the industrial revolution, the traditional feudal system broke down and new industries started popping up all over the world. This led people to start moving away from farming jobs and into manufacturing and service roles, which for many families meant that they had discretionary income for the first time. As a result, if they didn’t want to hide it under their mattress, they needed a safe place to keep the money. And with the rise of entrepreneurship and companies during that era, it also meant that many people sought loans for starting businesses, buying homes, and other consumer necessities. As a result, the financial industry really started to thrive, and banks popped up all over the place. These banks served four primary functions, which really haven’t changed that much even today. First of all, banks give people a way to save money safely. This makes sense – I’m sure you’ve seen a TV show or movie that included a bank heist where criminals broke into a vault. The vaults have huge doors, thick walls, complex security systems, and most importantly – lots and lots of cash, gold, and other valuables. And although some of this has changed, especially as much of currency and banking has become cloud-based, hosted on servers rather than in vaults, security is still the number one reason why so many people use banks to hold their money. The second traditional function of banks revolves around financing. As economies started changing, people started exploring new uses for credit and capital, such as what many people carry around in their pocket – the credit card. Now this is a form of financing. The expansion of consumer credit has been the key driving force in enabling many people to move from low income to the middle class around the world, and traditionally banks have been the best source of consumer credit. The third traditional function of banks is to facilitate investments. So, without getting too complicated, let’s use a simple example. Let’s say one day you receive your paycheck – and after paying all your bills you have some money left over. And you decide that you want to save that money.
But instead of saving that money in your bank account, you say: “hey, I want to buy a mutual fund, or I want to invest in the stock market by purchasing shares in a company that I like.” Banks are at the core of that type of investment activity, and that is an important role they play in society. The fourth and final traditional function of banks revolves around providing financial advice – often helping companies or individuals make the best use of the money they have at their disposal. For these four reasons, banks have been trusted community partners for centuries, and one of the key reasons for the rise of the middle class throughout the world. But are things starting to change?
Additional Readings
1.1.4 The Loss of Trust in Financial Institutions and the Rise of TechFins
As we mentioned, finance is largely built on trust. And in the past, banks served as the guarantors of trust in the financial world. But trust in financial institutions has diminished pretty significantly in many countries over the past decade. This of course is largely due to the Global Financial Crisis, which affected millions of people around the world. I remember that time vividly. Back then I was still practicing law full time in Hong Kong, one of the world’s financial centers. So most of my friends, colleagues, and clients were deeply affected by the crisis. Even so, we were only able to watch as the near collapse of the global financial system occurred. David Lee and I watched as many of our friends were terminated from their jobs with little warning. We had a daily reminder of how flippantly certain members of the global financial community pursued profits at the expense of their customers, raising concerns that government regulators were not adequately supervising the financial industry. The crisis and its aftermath highlighted how the financial system and banks failed to perform some of the chief roles they were meant to perform for our society, particularly in managing risk and allocating capital. Millions of people around the world lost their homes, their savings, essentially their futures. In the US alone, it was estimated that American households lost $20 trillion in wealth as a result of the Financial Crisis. And as a result, it might not surprise you that many people began to distrust the very institutions that were meant to protect and serve them. And the age-old characterization of bankers as greedy, selfish, short-sighted bloodsuckers returned in full force. Let’s be honest: many large financial institutions have not done much since the Financial Crisis to reduce our concerns, with multiple high-profile scandals only helping to hasten the rise of FinTech innovations outside of the traditional financial sector. Over the past ten years, due in large part to a combination of the Financial Crisis and the advent of the smartphone, a major shift has occurred, characterized by the rise of what we call the TechFins – digital platforms like Facebook, Amazon, Google, and Tencent – that provide e-commerce, peer-to-peer lending, and communications, and increasingly serve as the keepers of our digital identity. But after more than a decade of explosive growth, many of the TechFins are themselves embroiled in controversy, once again leaving customers wondering who they can trust. Data privacy breaches and little accountability have caused many people to question their use of these large technology platforms. But the fact remains: people still need financial services. So who will step up as the trusted partners of the future? Of our future? Let’s consider that question as we dive into our first case study.
Additional Readings
Buckley, R. (2016). The Changing Nature of Banking and Why It Matters. In R. Buckley, E. Avgouleas, & D. Arner (Eds.), Reconceptualising Global Finance and its Regulation (pp. 9-27). Cambridge: Cambridge University Press. doi:10.1017/CBO9781316181553.002
1.2.1 Case Study – Wells Fargo
Banks have used the past decade since the financial crisis to rehabilitate their image, some more successfully than others. But one bank has recently gone above and beyond in reigniting the general public’s disdain towards financial institutions. If banks are built on the foundation of consumer trust, Wells Fargo has systematically dismantled that trust, leading to an uncertain future for that institution. Wells Fargo has a really interesting history. It was established in 1852 in San Francisco during the gold rush, and as a result has long been an integral part of the American financial landscape. When gold was discovered in California, Wells and Fargo – two entrepreneurs – decided to provide services relating to the transport and safekeeping of gold dust, gold coins, salaries, and other critical resources all across the US Western frontier. You may have seen it before: the stagecoach is the logo of Wells Fargo bank. Before the advent of railroads, stagecoaches were considered the safest and most reliable form of transportation for people and valuables across the dangerous deserts of the Southwest United States. This was the age of the American cowboy, and those stagecoaches were the targets of some of the most notorious bandits of the time. You have probably seen movies with this type of scene – a stagecoach driver and a guard sitting up front. They usually carried sawed-off shotguns and revolvers, and often had to fight their way past bandits in the rugged terrain. Anyway, this is important because once again, the crux of the entire business model was based on trust. Trust that the Wells Fargo coach drivers wouldn’t steal the gold dust and bars they were carrying. Trust that the stagecoaches and the roads built would provide reliable transit to ensure payment of railroad employees. Wells Fargo was so trusted by the railroad tycoons that it quickly established the largest fleet of stagecoaches in the world, helping to build one of the oldest and largest banks in the United States, eventually employing more than 200,000 people globally. In September 2016, news emerged that employees at Wells Fargo, the most valuable bank in the world at the time, had created millions of fake bank and credit accounts that customers had never authorized. Due to a high-pressure sales culture and an incentive-compensation program for employees to create new accounts, Wells Fargo employees had engaged in an array of immoral practices, such as fraudulently opening accounts, issuing ATM cards and assigning PINs, faking signatures, and using false email addresses. Customers had subsequently been hit with late fees, overdraft charges, annual fees, and other costs – all of which could affect their credit scores. When customers noticed the charges, employees would apologize and lie, saying there had just been an administrative mistake. This dishonest program was based on the internal goal of selling at least eight financial products to each customer, or what Wells Fargo called the “Gr-eight initiative.” These products included credit cards, savings accounts, investment accounts, and more. Why eight, you may ask? Because eight rhymed with great! No joke: the CEO said “because eight rhymes with great,” and so the bank arbitrarily decided that each customer should have eight accounts with it. Selling different accounts to bank clients is commonly known as cross-selling.
Basically, if you go to a bank and open a savings account, they might ask you to open a checking account, or buy an insurance plan. This is called cross-selling, and they wanted the average Wells Fargo customer to have eight such accounts. Why? Well, in part because it allowed the bank to make more money in fees. But to be honest, the fees were minimal and Wells Fargo didn’t really make much money off of them. So then why would they do it? Why did the bank put so much pressure on its staff to cross-sell and push eight accounts that managers across the bank started creating fake accounts? The reason is that Wall Street analysts used data like “new accounts opened” as a key metric when evaluating a bank’s share price. That means the more customer accounts Wells Fargo could show, the higher its stock price went, even if Wells Fargo really wasn’t making any additional money. And when analysts saw all the new customer accounts, the share price for Wells Fargo doubled between 2012 and 2015. And who makes money when the share price goes up? Well, shareholders do, but in particular the executives and directors of the company, who are compensated primarily in stock options. So, in other words, even though Wells Fargo wasn’t making more money, or serving its customers better, the value of the shares doubled, making a lot of money for the bank’s executives – the very people who created this horrible practice in the first place. The high-pressure sales culture created by Wells Fargo bank executives, where employees could be fired for not hitting the cross-selling goals, created a toxic environment that pushed employees to fear for their jobs and make bad ethical choices, all while management turned a blind eye to the practice. The program finally became public years after Wells Fargo’s management knew about the problem. When asked why he didn’t notify government officials as soon as he learned about the problem, then-CEO John Stumpf said that the amount of money made by Wells Fargo from the program was immaterial to the bank’s size – and thus not important. Of course, this incensed the public and lawmakers alike, and they demanded action. So what did Wells Fargo do? Well, they didn’t replace any of their senior management. Instead, they terminated nearly 5,300 mid-level employees, stating it was their fault for making up all the fake accounts. Not a single top-level executive was fired at that time. Once again, this did not seem sufficient to the public and US lawmakers. US senators grilled Wells Fargo’s top management, and the media carried story after story detailing the bank’s actions – or perhaps more accurately, inaction. After mounting pressure, then-CEO John Stumpf stepped down, as did Carrie Tolstedt, the head of the community banking division at Wells Fargo. But don’t feel too bad for either of them. For example, when Ms. Tolstedt left Wells Fargo she received about US$125 million in equity compensation as a retirement package. All in all, Wells Fargo had engineered what one analyst described as a “virtual fee-generating machine, through which its customers were harmed, its employees were blamed, and Wells Fargo [and its executives] reaped the profits.” In light of the scandal, Wells Fargo and its new CEO, Tim Sloan – who was the bank’s former COO – emphasized that they would initiate refunds “as part of [their] ongoing efforts to rebuild trust.” But Wells Fargo’s problems didn’t end there.
Their unethical internal culture had permeated several of their businesses, leading to a string of scandals and investigations. For example: In July 2017, Wells Fargo admitted to forcing up to 570,000 borrowers into unneeded auto insurance. Reports also emerged that 110,000 customers had been incorrectly charged “mortgage rate lock extension fees” between September 2013 and February 2017. And last year news also emerged that a computer glitch at Wells Fargo caused hundreds of people to have their homes foreclosed on between 2010 and 2015. As a consequence of these numerous scandals, the Federal Reserve announced on February 2, 2018 that Wells Fargo would not be allowed to grow its assets until it cleaned up its act – an unprecedented punishment. In May 2018, Wells Fargo launched a marketing campaign to emphasize the company’s commitment to re-establishing trust with its stakeholders. The commercial opens with the Old West origins of the bank, depicting its transition from horse riding, the iconic stagecoach, the steamboat, the train, its branches, its ATMs, and now its mobile systems – portraying its whole technological journey. The video then goes on to make references to the scandals, illustrating how it is now a “new day at Wells Fargo.” That new day, and the attempt to re-establish trust, may have been in vain, because just a few months later, in August 2018, the US Justice Department announced that Wells Fargo had agreed to pay a $2.1 billion fine for issuing mortgage loans it knew contained incorrect income information. The government said the loans contributed to the 2008 financial crisis that crippled the global economy. If trust is a key component of the financial system and banks, what does the experience of Wells Fargo tell us about the financial system today? Do you feel like the Wells Fargo example is an outlier, and that most of the financial industry today can be trusted? Or are you skeptical about the ethics of the broader industry as a whole?
Additional Readings
1.2.2 Case Study – Wells Fargo: Breach of Trust
Okay, this is a crazy case that a lot of people in the financial industry were really, really concerned about. So why is this case so important? I mean, there seems to be a lot of financial crime out there, people not doing great things all the time – what made this particularly special? Yeah, it’s a good question because, again, the actual money that Wells Fargo made from this really wasn’t a lot, so in terms of financial crime it wasn’t that significant – and yet a lot of people were really upset about this. Some financial analysts even said that this was the worst financial crime ever. And I think the main reason is because, you know, for you out there, for me, I choose a bank solely because I need to know that I can trust them. Right. And here in this particular instance, they completely betrayed that trust, and seemingly for completely selfish and greedy reasons. So, when you say selfish and greedy reasons, what do you mean by that? Well, again, there really was no benefit to the customer here. When you open a bank account and you put some money there, you’re not anticipating that they are going to do all these shady things behind your back – opening up accounts, or signing you up for insurance, that you know nothing about. And in this particular instance, I feel like it was just complete dishonesty and betrayal of trust, where there was no benefit to the consumers whatsoever. They didn’t, for example, do any research and say customers are better off if they have eight accounts; they simply said that eight rhymes with great, and so therefore we’re gonna do this. Okay, so then who did benefit from this kind of activity? The senior staff, the CEO, various high-level people within the company, specifically those that had stock options, for example. Because it’s very unique, right: the bank didn’t make very much money off the unethical behavior directly; the reason they made money is that their share price doubled within a short period of time, so they were able to sell off their shares and personally benefit significantly from this, but the bank itself didn’t actually receive a lot of remuneration. That’s interesting, so you’re saying that, from an economic perspective, the bank did not make any money from this? But somehow these extra accounts they created increased the share price, because Wall Street analysts saw this as some sort of metric that the bank was growing. [Yeah, exactly.] And so, in terms of market value, it seems that it was increasing, but in terms of actual economic value there was no real value added by this behavior. So, basically the explanation is like this: the bank itself, when it does transactions, makes money off of them, just like you’d make money if you sell hamburgers or whatever. And the bank, from this kind of unethical, even illegal, behavior, only made – they think – between $1.5 and maybe $2.5 million from these transactions. But here’s the thing: their share price more than doubled, which means that the individuals who owned those shares, including the CEO and various senior officials who were pushing this behavior, made hundreds of millions of dollars collectively, and they walked away with almost all of that. Now, there were some clawbacks, where they had to give up some of that money, but again, although in disgrace, they walked away with hundreds of millions of dollars.
And roughly how much were the fines and other penalties that Wells Fargo had to pay because of this kind of behavior? Yeah, again, this is the terrible thing. If you’re the customer of a bank, and you want the bank to be led by people with integrity because you want to ensure that your investment is safe, here’s the rub: they individually made hundreds of millions of dollars, and then when they left the bank in disgrace, the bank ended up paying hundreds of millions of dollars in various fines and legal fees – potentially over a billion dollars more recently. And that doesn’t even include the reputational loss: municipal governments and state governments completely removed their business from Wells Fargo, which made it much harder for the bank to continue growing – or, not impossible, but it’s certainly hurting their bottom line. And it was so bad that the federal government in the US actually stopped their growth, saying: you’ve got to clean this stuff up because you’re not running this in a reputable way. So, it seems like there is a tragic irony here: the people who allowed that behavior to occur, at least on their watch, were able to benefit from it and walk away, while the fines the bank has to pay are really being borne by the current shareholders and the other current stakeholders, such as customers and employees, who have to deal with the fallout of all this. Yeah, and that includes all of you, by the way. So, think about it: if you’re gonna use a bank, if you’re gonna use them for services – how would you feel if you knew that they betrayed your trust in that way? How do you move on from that?
Additional Readings
1.3.1 Key Ethics Principle – Trust
After learning about the Wells Fargo case, what were some of the underlying thoughts that you had about the case? Did the actions of the bank leaders surprise you? And would you trust Wells Fargo as your bank after learning what they did? You might be surprised to learn that some financial analysts said this was the worst financial scandal of all time, primarily because Wells Fargo acted so completely contrary to the interests of its customers. What do you think? When studying ethics, it is often helpful to use examples like Wells Fargo and other cases to consider possible outcomes and actions in real-life ways. Throughout the course we will share cases like this in part to help you learn specific principles, but also to help you to create value judgments for your own life. To help you create a moral code, so to speak. By so doing, we hope that you will come to a clearer definition of personal ethics in your own life and career. And while there are many different ethical concepts that we could discuss throughout the course, we are primarily going to focus on five key ethics principles: trust, proximity, accountability, cultural lag, and privacy. Some of these concepts, like trust and accountability, will be really familiar and easy to understand. But some of the others, especially proximity and cultural lag, might take some additional study. And please also keep in mind, even though the basic premise of some concepts might be familiar and easy to understand, the challenge is to extrapolate out and consider how those concepts are going to affect us as technologies change in the future. For example, while we all understand the basic meaning of the term “privacy,” how do you think that concept will adapt and change with the advent of AI and facial recognition software? In this class we will ask you to look into the future a bit and try to predict what likely but unexpected consequences will result, whether good or bad. Okay, so let’s get started with the first key ethics principle: trust. We have already mentioned trust a lot in this module, and it is probably the simplest concept to understand. For example, it doesn’t take a finance or law degree to understand that the deceptive practices of Wells Fargo and its staff were incredibly unethical, and likely criminal. So we are not going to dwell too much on the concept of trust now. But it is worth repeating yet again that the entire financial system is built on trust, and therefore much of financial criminal law punishes breaches of trust – or of what we professionally call “fiduciary” obligations. And as a side note, for those of you who are familiar with the term “fiduciary,” it might interest you to know that the Latin root of the word literally means “one who holds something in trust.” Whether it was 250 years ago in a small European village where everyone knew each other, or in the much more complicated global marketplace that we have today, we have to understand that without a certain level of trust, the entire economic system comes crumbling down. Both traditional financial players and new fintech innovators must keep this in mind, and ensure that their products and services continue to enhance trust.
In fact, because financial institutions play such an important role in society, and since most people know little about complex financial products, most countries actually have disclosure requirements, meaning that banks have to be truthful and transparent with their customers, making sure they understand the nature of what they are buying or investing in. If banks are not forthright about material information, they can face significant penalties, including fines and possibly jail time. In other words, financial institutions have a higher level of trust placed in them by society, and therefore face higher penalties if they breach that trust. As a result, one of the major considerations relating to FinTech revolves around the need to ensure that all fintech applications and innovations enhance social and consumer trust, rather than diminish it. It would be unethical, for example, for unsafe or unclear financial products to be introduced into the market via a new FinTech app. Unfortunately, some early iterations of fintech have only caused the public to question the ethical use of these technologies. For example, the use of cryptocurrency to facilitate crimes has caused many people alarm. We need to address these concerns right from the beginning and ensure that tech innovators and finance professionals consider not only the bottom line, but also the importance of maintaining balance and trust in society.
1.3.2 Key Ethics Principle – Proximity
The second core ethics principle that we will be discussing throughout the course concerns the concept of proximity. In psychology, the concept of “proximity” is a key variable in explaining behavior in many circumstances. Proximity denotes both how physically close and how emotionally close we are to someone or something. And differences in proximity can lead to varied outcomes. One story that demonstrates the impact of proximity is the classic trolley problem. You may recall a teacher explaining it to you when you were younger. If this doesn’t ring a bell, don’t worry, we’ll do a quick recap. The typical version of the trolley problem usually compares two scenarios where there is a runaway trolley about to hit a group of five people. In the first scenario, you have the choice to divert the trolley with a switch, pulling a lever which would change the trolley’s direction and kill one person instead of the group of five. In the second scenario, instead of a switch, you are required to physically push a person in front of the trolley to stop it – thus saving the group of five – but killing the person you pushed. Both actions lead to a similar outcome, and yet the way that our brains process the situations is completely different. The trolley problem has been reviewed and studied many times, and in each case, nearly everyone opts to divert the trolley using the switch, and nearly all object to pushing a person into its path. This dichotomy highlights the importance of proximity in people’s decision-making. If an action is proximate, physically or emotionally, then we often rely on the “moral” center of our brain to consider the dilemma. That is represented by the fact that almost everyone chooses not to push the man onto the tracks directly. Conversely, if an action is non-proximate in nature, meaning the action and its outcome are separated even slightly, then we often rely on the “logic,” or cost-benefit, center of our brain to consider the dilemma. That is represented by the fact that nearly everyone opts to pull the lever, even though the action leads to nearly the same outcome as pushing the man. Now this is very important because our world is increasingly distant and non-proximate in nature, resulting in our leaders increasingly using amoral, cost-benefit analysis when making decisions that can affect broad sectors of society. Let’s recall the Wells Fargo example we just discussed. If you compare Wells Fargo, a large, international bank, to perhaps a bank in a small town, the role of proximity is pretty clear. Psychologically speaking, it’s generally much harder to cheat people we are proximate to, people we interact with on a daily basis, than a customer who is just a number, one person that is part of a large mass. Accordingly, the concept of proximity applies to FinTech as well. One great outcome of FinTech is that it will provide financial access to a greater number of people, those who are unbanked or underbanked. At the same time, though, this technology will probably require less human interaction, meaning less proximity as well. So does that mean that as proximity declines, people will lean towards cheating each other more? Who knows, but what is clear is that we want new innovations to bring us closer together, not drive us further apart.
Additional Readings
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI Investigation of Emotional Engagement in Moral Judgment. Science, 293(5537), 2105-2108.
Retrieved from https://science.sciencemag.org/content/293/5537/2105
1.3.3 Key Ethics Principle – Accountability
The third key ethics principle that we will discuss throughout the course is accountability. Accountability is really a subset of governance and regulation, and is essentially a question about fairness and who is responsible when things go wrong. Many of the governance structures that we rely on in society try to make it clear who is accountable when a problem arises. But as you will see throughout the course, as the world gets less and less proximate, it is simultaneously getting harder to determine who should be held accountable for certain injuries. And FinTech innovations may be making all of this even harder. Consider the Wells Fargo case we discussed earlier: were the people responsible for violating customer trust actually held accountable? As mentioned, the bank’s initial reaction was to terminate 5,300 mid-level employees for their involvement in the program. But what about the leaders who created and pushed the program? It seems pretty clear that in that case there was an accountability gap. This question of accountability is also relevant for technology. Consider a social media platform that you frequently use, say Facebook, Twitter, YouTube, or their equivalents in your country. If there is inaccurate or even harmful content posted there, who is accountable for that? Surely, we would say the individual who created and posted it. But should the technology platform that hosts the content also be responsible? This is an important question, and in the wake of fake news and some really tragic incidents, there is understandably a lot of debate about who should be accountable. In some countries, like Singapore, we may have an initial answer. Singapore is planning to implement a new law that will require online media outlets to issue warnings, possibly correct, and in some situations even force companies to take down content that is false. Prior to this, such platforms could act at their own discretion to close accounts or limit false information. Perhaps not anymore. The United Kingdom may eventually go even further in its efforts to regulate the internet through a recently proposed law that would make technology companies more legally liable for the content they host, through fines, penalties, and direct litigation. Areas that the possible new law would cover include content that supports violence or terrorism, promotes suicide, spreads false information, and even cyberbullying. So when considering accountability for technology companies, including FinTech firms, it seems that society may no longer be satisfied with attempts at self-regulation, which then raises the broad question of how large technology companies should be regulated. Additionally, should TechFins be regulated and treated differently than large financial firms? What should be the standard, and should that standard be global?
1.3.4.1 Key Ethics Principle – Cultural Lag
The fourth key ethics principle that we will discuss throughout the course is cultural lag, which is the idea that it takes time for culture to catch up with technological innovations, and that social problems and conflicts are caused by this lag. Until now we have talked mostly about finance, but FinTech is not only about finance; that’s the Fin, but there’s also the Tech, the technology. And cultural lag considers the best way to ethically introduce new technologies into the marketplace. Technological innovations are often characterized by one word: disruption. If you pay attention to Silicon Valley, it seems like someone is talking about disruption every few minutes. “We’re going to disrupt this industry.” Or “This innovation is built for disruption.” And while not everything out of Silicon Valley is really “disruptive,” many amazing disruptions and innovations have propelled humankind. And the pace of disruption seems to be increasing. Humankind has progressed more technologically in the past 200 years than in the previous 20,000 years combined. But is disruption always good? And even when the overall impact is positive, are there ethical issues that should be considered when introducing innovative disruptions? The answer obviously is yes, but we seldom talk or think about these ethical questions until after the technology has been introduced, which is often too late. As mentioned in the Introduction to FinTech course, as human beings we tend to overestimate the effect of technology in the short run and underestimate the effect in the long run. This seems obvious, right? For example, just think about the far-reaching impacts that smartphones have had since their introduction. Can you believe that smartphones were first introduced only around 10 years ago? I guess for some of you younger students, that might not seem like a long time ago. But for a lot of us, that seems like only yesterday. Either way, the point is that it has only been 10 years, but think about how much of an impact smartphones have had! Pretty much everyone has one, and that includes a large swath of the developing world. And many of the most amazing FinTech innovations are only possible because of the smartphones that all of us are carrying around today. But here’s the thing: smartphones became popular so quickly that we, as a society, didn’t really have time to understand the implications of the technology on our broader culture. And every time we started to adapt and adjust to the technology, tech innovators would introduce some new feature to stay ahead of our adjustment period. These are all examples of cultural lag, and they show that technology is able to change more quickly than society can culturally adapt to such innovations. And there’s one important aspect of cultural lag theory that we need to understand: sociologists and economists believe that many of society’s most challenging problems are often caused by cultural lag. Again, think about smartphones. Experts in many disciplines are now emphasizing that smartphones are actually creating or reinforcing serious social problems. We have all heard reports that emphasize that we spend too much time looking at our smartphones, focusing on social media to the exclusion of our actual social circle.
After a decade of not really understanding the implications of these habits, people are now working to reduce their screen time, and many technology firms like Apple and Google have introduced products to track and even lessen screen time, encouraging users to spend less time on their phones. There are many more serious examples highlighting the gap between changes in technology, which occur very quickly, and subsequent adaptations in our culture, which happen very slowly. And the smartphone example is only a very simple one. The reality is that some of the biggest problems society faces – things that are so big that we sometimes have trouble seeing or understanding them – are often tied to technological disruption and the cultural lag that stems from it. And while these massive innovations are rightfully celebrated for their positive impact, it’s worth considering some correlated points. For instance, what happens to all the people who work in industries that are made obsolete by new technologies? Certainly a lot of people have benefited from technological innovations, but not everyone has. Or at least, people don’t benefit equally. Is it morally necessary for new technologies to benefit all of society? And even if that is possible, should it be an overall goal? Should that be a normative aspiration of new technological innovations? Let’s consider another example: drones. Do you have one? Or do you know someone who does? They are now pretty popular, and became so popular so quickly over the last few years that governments were caught off guard without regulations specifically covering private drone use. And there are some scary aspects of drone use that people may not have considered previously. For example, people have weaponized drones, with one drone even being used in an assassination attempt on a head of state. And while some companies are using drones in Africa to deliver blood for transfusions, there are also people using drones to drop contraband into jails and prisons, or to smuggle drugs across borders. When considering cultural lag, laws are some of the slowest-changing aspects of culture. It can easily take years for even simple laws to be enacted. As a result, when drone technology rapidly advanced, making drones affordable for almost anyone, governments raced to catch up, creating regulations to help balance public safety with personal recreation. As is probably clear, it’s hard to hold someone accountable for improper drone use if there is no law defining proper drone use. Thus, the cultural lag between the rapid advancement of drone technology and the much slower development of drone-related laws has created some serious concerns, including disruptions of airports and worries about privacy and the use of drone cameras around personal residences, military installations, and other sensitive locations. So when new technologies are introduced, and these gaps or lags are created, who should be responsible for the negative consequences? The innovators and inventors? The government? The users? Governments around the world have been grappling with questions like these for a long time, and some disruptive FinTech innovations are going to pose very significant challenges for regulators – and some already do.
Additional Readings
Ogburn, W. F. (1957). Cultural Lag as Theory. Sociology & Social Research, 41(3), 167-174.
Marshall, K. P. (1999). Has Technology Introduced New Ethical Problems? Journal of Business Ethics, 19(1), 81-90. Retrieved from https://www.jstor.org/stable/25074076?seq=1#metadata_info_tab_contents
Brinkman, R. L., & Brinkman, J. E. (1997). Cultural lag: Conception and Theory. International Journal of Social Economics, 24(6), 609-627. Retrieved from https://www.emeraldinsight.com/doi/abs/10.1108/03068299710179026
1.3.4.2 Productivity Shifts and Technological Revolutions
Okay, if you are watching this course, chances are that you work in some type of service industry like finance, law or accounting. If so, what is the difference between your chosen career and that of, let’s say, a farmer or some other type of blue-collar worker? A lot of us choose our careers based on security – industries that we think are safe – but here’s the reality: you are a lot more like a farmer than you may realize. So now you understand cultural lag and both the challenges and moral implications that come from introducing disruptive technologies. From a FinTech perspective, there are some exciting disruptions right around the corner. And while these innovations will make many aspects of life much easier, there is one major challenge that we feel we need to address: the social ramifications of unemployment and job loss. Okay, to get this point across, we need to go back in time again. Early human communities congregated in villages for specific reasons. Obviously, protection and socialization were among those reasons, but there was one overarching activity that held early societies together: food. Early human communities revolved around agriculture, and many of the most important early innovations revolved around the growing, harvesting, and storage of food. During the Bronze and Iron Ages, stone and wooden tools were replaced by more efficient metal tools, but the main processes of agriculture remained largely unchanged for thousands of years. That changed quickly during the Industrial Revolution. In many parts of the world, horse-drawn and even mechanized harvesting equipment were introduced, leading to a vast increase in productivity. This not only sped up the time in which crops could be planted and harvested, but also significantly increased crop yields. During that time, the number of people working in agriculture dropped, but the amount of land that could productively be used to grow crops grew substantially. This led to fewer but larger farms. To put it simply, fewer people were needed to farm, yet more food was grown. In fact, some historians contend that these improvements in agriculture “permitted” the Industrial Revolution, because the increase in food production and the decreased need for farm labor meant that more people could work in urban industries, providing labor for factories, large urban utility projects, and really all the innovation that led to the rise of the 20th century. But there were a few obvious problems that stemmed from this. First, these advances didn’t occur everywhere. Many of the countries that are still developing today did not participate equally in these advancements, for a variety of reasons, and as a result their economic progress was delayed. And even in developed countries where these advancements were adopted broadly, the benefits were not distributed equally. But there was another, more serious issue. While the machines and tools that were introduced improved productivity, they also made many jobs redundant, leaving millions out of work and needing to transition to entirely new industries. In the United States alone, agricultural jobs went from 40% of the workforce in 1900 to only 2% in 2000. That’s only 100 years! And while that may seem like a really long time, in terms of human history it is incredibly short. So where did all the old farmers go?
Well, the lucky ones were able to find even better jobs in manufacturing, or in services related to the agriculture industry, like logistics, storage, or marketing. The point is that they had to reinvent themselves and rethink the way they defined work. For many people, this was a great opportunity, but of course others got left behind in the process. Now consider today. We now have unmanned drones that can plant seeds, spray and monitor the health of crops, and even harvest them. And artificial intelligence and machine learning are being integrated to help farmers make better decisions and monitor growth in real time. As a result of all this, less and less human labor is needed to run mega farms. For most developed economies, these changes started more than a century ago. But what happens when these modern technologies are introduced into developing countries today? Well, let’s look at an example. From 1990 to 2017 – just 27 years – it is estimated that agricultural employment in China went from about 55% of the workforce to about 17%. That’s a difference of several hundred million jobs. And while China has done a good job of expanding its economy and transitioning those workers to the manufacturing sector, you can see how difficult it can be when productivity shifts make workers obsolete. In fact, speaking of manufacturing, this is happening in that sector as well. Throughout the US and EU, manufacturing jobs have been slashed through a combination of innovation and automation. Many of the workers who lost their employment have yet to be fully reintegrated into the workforce, leading to significant political pressure and social anxiety. One question that bears asking is: who is responsible for ensuring these new innovative technologies are integrated in such a way that social harm is minimized? Is that the role of the innovator? Is that the role of the government? Or someone else? Okay, so why does any of this matter? And what does it have to do with FinTech? Well, if predictions can be believed, we are about to enter the Fourth Industrial Revolution, which could bring the most significant disruption and productivity shifts humankind has ever seen. Artificial intelligence, blockchain, and other new technologies will completely alter not only how we work, but our entire perception of what work is. Let’s use two concrete examples: cashiers – the people you pay when you leave the store – and drivers. These are the two most common jobs in the United States, and the same is true in many other developed countries. Millions of those jobs are likely to be eliminated within the next 10 years as automation, driverless vehicles, and other FinTech innovations make them obsolete. In fact, it has been estimated that 38% of current US jobs are at high risk of being made redundant by robots and automation in the next 15 years – that represents about 60 million jobs, or one-fifth of the entire population of the United States. And although some new jobs will be invented that many of these workers will be able to take up, unlike in previous productivity shifts, these newer innovations are largely replacing human workers completely, making it difficult for the unemployed to simply shift into new work. For example, for a farmer to go to work in a factory, few new skills are usually required.
But for a cashier or truck driver to become a computer programmer or robotics engineer, an entirely new skill set requiring years of schooling and training would be needed. So now let me turn the question back on you: what are you doing to ensure that your profession and career don’t become redundant, and that you can stay ahead of these new and emerging technologies?
1.3.5 Key Ethics Principle – Privacy
The fifth and final key ethics principle that we will discuss throughout the course is privacy. The debates raging around the world over the concept of privacy are the amalgamation of all the concepts we will cover in this course, including trust, proximity, accountability, and cultural lag. Privacy is one of the key issues of our time, and is something that we need to start thinking much more deeply about. Among all the other problems already discussed, Wells Fargo has also experienced privacy and data breaches. For example, in 2017 Wells Fargo accidentally sent out 1.4 gigabytes of files containing personal information on about 50,000 of its wealthiest clients, including their social security numbers and personal financial data. Luckily, that data breach was fairly limited in its reach, but what if the data had been shared on the web for all to see? Who specifically should be held accountable for such a breach? That’s actually a surprisingly hard question to answer. Questions relating to the right to privacy are not new. But with the advent of smartphones, facial recognition software, machine learning, and other FinTech innovations, our right to privacy in the traditional sense is diminishing rapidly. For example, people are increasingly worried about the possibility of being tracked by their smartphone hardware. And many common apps have been breached, or even actively misuse our private personal information. Facebook, for example, has been embattled over the past couple of years by concerns relating to privacy. As one of the most actively used social media platforms in the world, Facebook has been accused of allowing private customer information to be used for several unwelcome activities. It has even been accused of allowing its platform to covertly influence political elections. Other new technologies, such as voice recognition products and wearable devices, have people worried about who is listening to and possibly recording their private conversations. Just think about it: we click “I Accept” on so many websites without actually reading the terms and conditions that we have become desensitized to the fact that these are real legal agreements. If we as a society are going to take privacy seriously, we need to consider the moral and legal implications in a practical context – and ensure that we are clear on what rights we are giving away. Maintaining a balance between privacy and profitability in the commercial sector, or security in the public sphere, is an increasingly important challenge. But has the age of privacy in the traditional sense already ended? Have we already given up so much data via social media and smartphones that there is no turning back? And should the race to create sentient AI, which requires massive amounts of data, take precedence over personal privacy? These questions, and many more, will be discussed throughout this course, and we look forward to hearing your thoughts on how best to navigate these tricky privacy waters.
1.4 Module 1 Conclusion
Throughout this module we have considered some of the underlying reasons we have money and financial institutions in the first place, helping us understand the ethical foundations of FinTech innovations. The reality is that money is a societal construct based on trust, and the value ascribed to money is somewhat subjective. As a result, for centuries societies have relied on a shared definition of monetary value, as well as trust in banks, to ensure our money and economies are stable and secure. But unfortunately, throughout history, including the years since the financial crisis, some financial institutions have forgotten their important role in society and have breached that foundation of trust. This has led many to embrace non-traditional FinTech innovations as a way to democratize finance, and potentially move away from traditional financial industry players. Both finance and FinTech companies need to keep this in mind, and ensure that their innovations earn the highest level of societal trust possible. By walking through the Wells Fargo case, we have introduced each of the key ethical principles that will be highlighted throughout this course. Once again, those principles are: trust, proximity, accountability, cultural lag, and privacy. Keep them in mind as we proceed through the course. Next, in Module 2, we will introduce a technology that most of you have heard of before: blockchain. While many are excited about blockchain’s many efficient and cost-saving uses, others have highlighted its role in facilitating illegal activities. Let’s consider both of these together, to hopefully ensure the use of blockchain will be ethical, and will help lead us toward a more utopian society.
Module 1 Roundup
– Hi everybody. Welcome to the weekly wrap-up, where we discuss various course-related matters. First, we want to give a huge thank you to everyone who’s participated in the course so far. We’ve had a really great response and we’re so happy about all the amazing comments on the discussion forums. – Yeah, the response has been great, and currently there are 4,777 of you enrolled in the course from 154 countries or regions around the world, which is great. We’re so thrilled to see many parts of the world represented, and we’re especially grateful for those of you who are including specific examples from your home countries and cultures in the discussion forums. Finance, fintech and tech disruptions are affecting various parts of the world in different ways, so it’s great to hear local perspectives on everything we’re doing. – Yeah. In response to the poll questions, it was great to see that most of you believe an hour of education is worth more than 100 eggs. So I guess that means you value education, which bodes well for our future as educators. But we also think it’s cool that most of you wanted the whole chicken. It shows you’re savvy negotiators. – Now we also thought it was interesting that the majority of you out there still trust banks over fintech startups and techfins, but 28% of you don’t trust any of them. That’s a really important statistic that we hope you explore personally and in the discussion forums as the course continues. You know, it’s incredibly important that financial service firms do a better job establishing trust in the marketplace. And if you all out there are representative of the market, then it’s clear many people around the world do not trust financial firms either. – And speaking of trusting banks, many of you commented that you trust banks more because they are better regulated and your deposits are insured. Now while we agree and think those are really reasonable ideas, it does make us wonder: isn’t one of the major focuses of fintech innovation to avoid regulation and government intervention? And isn’t regulation one of the things that makes banks inefficient and hard to deal with in the first place? We look forward to hearing your feedback as the course moves on, to see how fintech and techfin firms can continue to build trust while maintaining their efficiencies. – Now, when thinking about proximity, 77% of you said that you would pull the lever to divert the trolley from hitting the five people, killing the one person instead. But on the other hand, 61% of you said that you would not push the man off the bridge to save the five people. These results largely mirror what academics have found when researching these questions in the past. This is a great example of how proximity can alter our decision-making. – And you’ve provided a lot of really great examples of proximity affecting our behaviour in real life. Some of you mentioned how we’d be willing to donate our organs to save a loved one, but maybe not do the same thing for strangers. Or how we often donate to our churches or charities in our local communities, but then don’t do the same thing for problems that are affecting people far away. And one of you mentioned stealing from a bank versus cybercrime. Now, we’re gonna talk about issues of bank theft in the next couple of modules, so remember that one for later. – Now, concerning the disruption that will come from tech innovation:
A full 60% of you said that you were either very or somewhat concerned that the disruption and the resulting job loss would create broad social problems. And interestingly, when asked which industries you thought were most at risk of disruption, 41% said accounting and auditing, while 34% said finance – by far the two most common answers. Is that because most of you are from the accounting or finance industries, and therefore you see the risk of automation in those industries? Or is this just an overall observation? – Yeah, I was wondering about that. Either way, we hope that you’re all thinking more deeply about your life and your career and how you can kind of future-proof yourself – meaning, planning ahead to ensure that you’re not made redundant and are always employed doing something that you love and that’s meaningful for society. – Now, we also appreciated all the great comments about cultural lag. We know that this is probably a new concept for many of you, but basically cultural lag means that technology adapts and changes faster than culture, especially in areas like law and religion. And as a result, sometimes technology changes so fast that it takes a while for us to realise the negative impacts that technology may be having on society. – And you provided really good examples of cultural lag in everyday life. Including the impacts of social media on teenagers, especially from cyberbullying, which is kind of a sad but really relevant thing. The spreading of fake news and false stories via social media, which is made possible by the omnipresence of smartphone technology. And the environmental costs of mining Bitcoin – that’s one of the topics that we’re gonna discuss in the next module. – Finally, we also learned that students out there all over the world are smart and disciplined with their money. When asked what you would do with a million dollars, you all answered really responsibly. We were expecting some really crazy answers, like trips to Las Vegas or buying an island or something, but most of you talked about paying off debt, investing for the future – particularly kids’ schooling – and even helping the community. Now, in our next module, module two, we’re gonna look at blockchain and its governance. We won’t cover blockchain in too much technical detail, but rather look at some of the policy and governance implications of what blockchain technology means to us. We’re gonna look at a few interesting cases, including the Silk Road, which is a really fascinating case almost straight out of a movie. Additionally, we will look at some technical details of things like smart contracts and what they mean. We’ll also look at remittances and how blockchain can play a significant role in transferring money from place to place. There are obviously technical details around that, but we won’t go into specific coding or anything along those lines. Rather, we’re gonna look at some real-world implications of this new technology, how it really impacts people, and hopefully, in some ways, improves their lives. – Okay, so you’re not gonna be telling them which cryptocurrencies to buy? – Well, unfortunately we don’t have that knowledge. If we did, we probably wouldn’t be sitting here earning professors’ salaries from the university. All we can say is: make sure you do your research.
– Yeah, and as the course moves on, we hope that you all maintain your engagement, especially within the discussion forums. One of the things that we were super excited to see was all the comments from all over the world. We had comments from Kenya, from all over South America, from Europe, North America, and obviously here within the region of Asia, where we are. It was super cool to see your comments, especially in terms of the very personal and specific ways that these things apply within your local communities and within your specific jobs and careers – and hopefully in the ways you think they’re gonna disrupt and change how your communities, your careers, and your industries operate in the future. So please stay engaged; we’re really looking forward to more. – Yeah, and I’ll echo what David Bishop has said, some of the feedback we’ve gotten has been fabulous and– – Really excellent, very thoughtful comments. – The comments have been great, which we appreciate, and at some point we do get to all of them – one of us will read them and try to comment where we can. Additionally, some of you have told us that you’ve finished the module, you thought it was great, and you’ve passed it on to your friends, colleagues and acquaintances. So if you feel like this is really compelling information, and that these are questions that are important to think about and consider, please do introduce it to the people around you, and collectively we can increase the quality and quantity of the discussions, because these are really important questions. – Yeah, one last point, because people have reached out to me. As you’re gonna learn later on, this course talks a lot about migrant workers. I’ve actually been contacted by literally hundreds of foreign domestic workers here in Hong Kong, mostly from the Philippines, and we wanna say thank you for joining us, and don’t worry if you’re not a finance expert. We’re super excited to have people from developing countries all over Asia, and really all over the world, looking into these insights and really engaging in these topics and conversations with us. So if you don’t understand everything, that’s totally okay. Within the discussion forums it’s absolutely appropriate to ask questions, right? And really engage with us at that level, because we really hope that all people and all communities can benefit from these insights going forward. – Absolutely, because one of the key components, I think, of our course – this idea of the ethics and risks of new technologies – is that it should be inclusive, right? – Yeah. – Part of our mission in doing this course is to create a greater level of financial literacy, as well as digital literacy, around some of these new trends. So that people ask the right questions and think about the right factors to weigh as we go into this new world and the technological future we have. – Yeah, especially because it kinda gives the wrong impression that we actually know the answers. (laughing) I mean, the reality is these are very complicated, very complex problems that really haven’t even fully developed yet, and so again, this is meant to be a broad conversation, and we really appreciate all people of all levels joining in and sharing your insights.
Asking your questions – and we’ll be asking them too, and we’re very sincere in that, because we don’t have the answers. But we hope that society, and our community within the course, can develop some insights into some of these questions together. Okay, so speaking of the great discussions we’ve had, we did want to give a shout-out to a few of the people who have been posting, because we really appreciated some of their comments. So first of all, from peter-nyc. He had some really good comments, and the one we want to talk about is the way he highlighted the role, or the purpose, of a business, and the way society defines success. He noted that maybe the behaviour of Wells Fargo was to be expected, because what we’re taught in business school is that the role of business is to generate profit, especially for the shareholders – and that’s exactly what they did. And more specifically, on the personal side, he said that for many people the definition of success is largely tied to their paycheck, basically. – Yeah. – Right? So what were some of your thoughts? – Yeah, so I thought peter-nyc’s comments were super insightful, but on the first point, about the purpose of business as it relates to profitability: that idea entered academia and filtered into mainstream business over roughly the last three to four decades. – Late 70s. – Yeah, as kind of the norm – and if you didn’t do that, somehow that was incorrect. Actually, that’s not true, and I think both of us teach this in the business ethics classes we normally teach: outside of a few circumstances, profit maximisation is actually not a legal requirement in many situations. And in terms of schools of thought beyond the traditional Chicago school, Milton Friedman-esque profit maximisation, there are actually other schools of thought. Peter, in his comments, seems to be much more heavily influenced by management gurus like Peter Drucker, who advanced some of the ideas I think Peter is discussing. So I think that’s important to point out: profit maximisation is not the default, and there is an increasing shift away from it. I think to the second point, about how we value ourselves. – Yeah. – Is it based on a paycheck? I think peter-nyc rightly points out that that’s faulty logic too– – Or at least that it leads to maybe difficult unintended consequences. – Yeah, and if you’re always measuring yourself using that as the metric, you’ll always be behind. – Yeah. – That’s one thing: you’ll never be happy or satisfied. But we also know, psychologically speaking, that money actually doesn’t bring happiness. Now for sure, if you don’t have enough money to cover your day-to-day needs, we know that will make you unhappy, psychologically. But once you move beyond that, making more money doesn’t necessarily make you happier. – And we both know enough wealthy people, I think, to say pretty definitively that the size of your checking account doesn’t necessarily mean you earned it or that you’re smarter– – Sure, sure. – Than everybody else, right? – In many respects you may just have been luckier than everybody else. – Yeah, yeah. A lot of it is timing and whatnot.
So typically what I tell my students when we’re teaching business ethics is that the coolest thing about the concept of success is that it’s a completely subjective term, and you get to define it. And one of the things that we want to do within this course – really, I think the underlying reason why we decided to do it in the first place – is to highlight a different definition of success and have people, especially in the financial and tech industries, think ahead and ask: okay, what type of future do we want, and how are we going to define success? And peter-nyc, one of the things that’s really encouraging is to see so many leaders in the financial industry now also leading in this fight against short-termism – against this idea that all you should be doing is looking at the quarterly earnings statements – and really pushing towards more long-term investment. So you’ve got leaders at BlackRock and JP Morgan and Goldman Sachs and other places, actually including Warren Buffett himself, saying that focusing only on short-term profits is probably a losing battle and something we need to move away from. – Yeah, and I think what’s also interesting, in some of the follow-up comments that peter-nyc made, is that he talked about culture a little bit, which I think is really important. At the individual level, how we measure ourselves and what success is – that’s all part of the cultural mantra that you want to have for your personal life. But I think he was also talking about organisations – banks and other institutions – and the culture they have. And he shared an article about students who went to Harvard Business School. – Yeah. – Their idea was, I want to go change the world, but they come out wanting to work at an investment bank. – Yeah. – Having worked at an investment bank– – Our students. – Yeah, having worked at an investment bank, there’s nothing necessarily wrong with working at an investment bank, and that’s fine. But the idea is that being in a particular culture changes what seems important to you and what your aspirations are. I think that’s important to understand, because if you have certain values, you wanna put yourself in a culture, or create the culture around you and within you, that ensures you can live true to those values. Which I think is quite important. – Yeah, so that means, peter-nyc, if you are from New York City, which is one of the world’s other great financial centres, then share this course with your friends and colleagues, and let’s start changing the narrative – having people define success differently, and looking at the role of financial institutions, and all companies, in a more ethical and morally centred way. – Thanks, Peter. We’ve got another comment from JoergHK, and hopefully that means you’re in Hong Kong, which is so great. The comment was in response to the question about who is responsible for the negative consequences of tech innovation. And JoergHK suggested that perhaps the burden of that responsibility lies with the government: to design and fund policies, through taxation, that deal with the negative externalities of technological innovation.
So that raised a very interesting debate on the discussion forums, and this is something we frequently discuss between ourselves and within our classes here at The University of Hong Kong: who is responsible for the displacement of labour, for people who can’t find jobs, and for technological advances that may make certain companies obsolete? Who is responsible for those things? Because on a wide scale, those things will happen – and already have been happening. And again, that created a bit of a discussion, and a user with the username Jessielam2018 responded that maybe the whole obligation doesn’t actually fall on the government – there are other stakeholders who should get involved. So that’s a great place to start with this very important question. David, what do you think? – It’s super complicated, and this is really at the crux of what we want to talk about for the whole course. So again, we really appreciate the sophisticated dialogue – very kind and thoughtful dialogue as well. One of the things we’re trying to do is to foster a place where people can disagree in a respectful way, and we appreciate you for doing that. So on the one hand, you have the role of the government, and JoergHK also mentioned statements from the World Economic Forum and the idea that, yes, people are going to be displaced. – Yeah. – Right? There are going to be a lot of new jobs invented, but there are also gonna be a lot of people who are not really able to move or upgrade within the workforce. And we’re seeing that across the world, right? So again, I’m from the US. I’m from an area where they had a lot of manufacturing, even just a decade or two ago. In fact, I used to work, you may be surprised to hear– – That you used to work? – Yeah, yeah. – That is surprising. – I used to work in a woodworking factory. So I worked in a factory in rural southern Georgia with kinda salt-of-the-earth, normal people, and the reality is that with automation, those are the types of jobs that have largely gone away over the past 20 years. – Or moved to other countries– – Well, sure, sure, excuse me. They’ve left my area of the US. So you see a lot of areas that used to have a lot of manufacturing work that is now gone – in the Rust Belt, for example: Ohio, Pennsylvania, etcetera. And that has had a massive influence on politics and so many aspects of everyday life, including social ramifications such as the opioid crisis which is now going on. So you can see this is like a domino effect of negative consequences, and that’s why it’s so important for us to think ahead. And some of the comments everyone made show that this is not just restricted to manufacturing or, before that, agriculture, right? So now people are saying, well, we think that finance is gonna be disrupted, we think that accounting and auditing are gonna be disrupted. We’re former lawyers; we think that the law– – The law will be– – Yeah, the legal field is gonna be disrupted. And so the question is, how do we re-educate and reintegrate workers back into society? I do see Jessielam2018’s point, though. This is a problem. The government is typically reactive, right? – Yeah. – And often can’t be proactive in these areas, and so we do need other parts of society to step up; and from the government’s standpoint, we need more proactive, positive incentives as well. So she mentioned tax policy, for example.
So you can have tax policy that encourages people to donate to charities, for example. Or maybe we can create tax policies that encourage more innovation, job creation, and other things. – Yeah, and I think one thing that is important to point out is that this trend that David Bishop just described – and that was described by JoergHK, and followed on by Jessielam and others in the comments and discussion – this trend of displacement of labour, is actually not new. – Hmm. – We talk about it a lot now because of technology, and maybe because the pace of that disruption is increasing. – Right. – But even if we look back 20 or 30 years and look at the locations of manufacturing of electronic goods, or garments, or shoes even, we saw this move from Japan over to places like Taiwan and Korea– – Including here in Hong Kong. – And in Hong Kong. Then it moved over to mainland China. And mainland China became the place for much of that manufacturing because of its low-cost, somewhat skilled labour – a nice combination, but as the cost of that skilled labour started to creep up, much of that manufacturing moved either inward to other parts of China or to parts of South East Asia as well. – Right. – And so this pattern of displacement is not necessarily driven by technology alone; it is obviously driven by cost too, and those forces are intersecting now to make the pace of change a bit faster than perhaps what we’ve seen before. So one thing that we need to do as we continue this process is to place these trends in their proper historical context, and understand that we like to think the situation at our point in history is quite unique – and there may be some unique aspects of it – but most of the time, something similar has probably happened in the past. Putting it in the proper context is helpful. And more practically speaking, related to policies: if we rely on governments alone to solve the problem, the issue we run into is that there’s usually a timeline mismatch. Particularly for governments and politicians who are elected into office, their perspective of time runs from the time they enter office until the time they’re trying to get re-elected. Whereas if you are the labourer being displaced, your perspective may be very different. And so if you rely solely on government to solve those problems, to be frank, that could be a bit precarious. So in that capacity, then, who should be involved? There is the government, obviously. There is the individual. But should industry be part of that process? Of course, I definitely think so. Should educational institutions like us? Of course. And I think there’s a very healthy debate to be had here: there are traditional research-based universities like ours, The University of Hong Kong– – Right. – And they serve their purposes in certain ways, but do all institutions of tertiary education have to be like us? I don’t necessarily think so. – Right. – Could more of them be tailored to helping re-school and re-skill people who are at risk of being displaced? For sure. And I think we need to think about that, and then about how certain tax policies, tax credits, government policies and government credits could be applied to make that more effective.
– And there are a lot of government resources being utilised for that purpose now. – At least in certain countries. – Schools in South Korea, for example, because there aren’t as many children in primary and secondary schools, are actually bringing in the elderly from the countryside to learn how to read, so those resources stay fully utilised. And here at The University of Hong Kong, my colleagues and I actually created a weekend programme called Empower You, where we bring in migrant workers – foreign domestic workers, mostly from the Philippines – and they come and use the resources and are taught by professors and company leaders in areas that help them improve their skills. So I definitely agree with David Lee: this is not necessarily anything new. However, I do hope that there is one thing that is different from before. As in all aspects of society and all aspects of learning, our hope is that we can learn from the mistakes of the past and forge together a better version of the future this time around. So if we are going to make a difference, if we are gonna have a better future, this means looking back at the last 30 or 40 years and understanding what went well and what we can change, so that as this new wave, this fourth industrial revolution, sets in, we will have a plan in place and society doesn’t have to reap these kinds of negative unintended consequences. You know, one of the discussions that I found most interesting – probably ’cause it kinda hearkened back to my legal training – was the conversation about accountability, and especially who should be accountable for harmful, negative, maybe even untruthful things that are posted online. Should it be the person who posts them? Should it be the platform? Or some combination of the two? And a lot of you chimed in and said that it is the poster’s fault, and the poster should be accountable for anything that is harmful or untruthful. And so we had one comment from a user – and I’m not sure how to pronounce your username. Xeilani or something? X-E-I-L-A-N-I. A really insightful, kinda long comment that created a nice dialogue, really focusing on how it is the person who makes the comment who should be responsible for those words, especially if they’re untruthful – and how, obviously, the platform is also responsible for having a range of tools to make sure that comments are being read and analysed properly. But really, at the end of the day, it’s kind of up to us. And so my comment back was: that totally makes sense, and more regulation is probably inevitable. But one of the things I was wondering was, who is it that we trust to define harmful, or even untruthful, right? Because unfortunately, truth is often in the eye of the beholder, and it’s often very difficult to decide. And it kind of spurred a nice conversation. So where do you fall on this? I mean, there’s certainly no right or wrong answer yet, but in terms of this conversation about increased regulation, and punishing the people who put harmful things up there versus the platforms, what are some of your thoughts? – Yeah, so I think, big picture, if we look at the ecosystem, so to speak, of the relevant parties involved: there’s obviously the content producer. There’s the platform. There’s the viewer.
– Yeah. – Should that responsibility be portioned amongst those main parties, or are there other parties that we’re forgetting who should also be included? I think maybe that’s one of the foundational questions to start with. If we look back at earlier forms of media publication, frequently it was the platform that had more responsibility for the content it put out. – And largely– – The difference is– – Well, go ahead. – Here we go, right? This is the difference: they were screened. But they can still– – Well, they were screened, and publication was very, very hard, right? So there were only a handful of newspapers, magazines, whatever, right? – And if you were an entry-level reporter a generation ago, maybe you started off as a fact checker. – Yeah. – Right? And so this is why there was credibility associated with many well-known newspapers and current-events magazines. Because people had a belief or faith, and there was a system that filtered out the things that were incorrect. – Yeah. – And if– – And you had limited circulation, so you had to be truthful, because if you lose the consumer’s trust, you lose their business. – And your advertising business as well, right? And part of it, too, is that if they did find a problem, they would clearly correct it. They would say: this is a correction, this was incorrect in the last article, or something like that. And there was that kind of process, so people felt like these were the norms of information sharing and media. I think now, with the platforms that we have, that’s not the case, obviously, because you can produce something and it will basically go through no screening on a lot of platforms and just be posted. – Yeah. – Right? Which is a very different kind of issue, and if there are inaccuracies, then the process of rolling them back is a little bit more difficult than it was in the past. Now, there are countries, as we mention later in some of the modules, that are trying to address this through legislation. Singapore is one of these countries; the United Kingdom is another. They are trying to introduce legislation to make platforms more responsible, particularly for the veracity, or truthfulness, of the things that are posted. Those countries have decided to go in a particular direction, and it remains to be seen how that plays out and how it’s implemented – some of those laws are just planned, and no legislation has actually been passed yet. So that’s one area of interest. But a second, broader issue is that for a lot of platforms, in terms of where they’re hosted and where some of the data is being stored, there’s definitely a multi-country, multi-jurisdictional component to it. – Yeah. – Just because you get one country on board doesn’t actually mean you can solve the whole problem. – Yeah. – That’s another problem. – So what you’re getting at is the legal concept. We talked about accountability, but let’s be more specific about liability, right? This is where it really gets challenging. But let’s say, as Xeilani – or however you say your username – suggests, that it should be the poster who is ultimately liable for what he or she or it (if it’s a company or a bot or something) posts, right? What if that person is located in a different country? Which is very possible, if not likely.
Do you have zero recourse in terms of finding and then actually suing that person? Right? So this is actually really reminiscent of a legal discussion that went on in the 1970s, tied to manufacturing, very much like what we just talked about a moment ago. When manufacturing was done in one country – let’s say you’re manufacturing in the US for US consumers – it’s really easy, if something is manufactured incorrectly and someone is harmed by it, to just go after the manufacturer. But then what happened was, all the manufacturing in the US started going abroad, overseas, right? And so all of a sudden, if you buy a toy, or if you get a toy in a McDonald’s Happy Meal that ends up, unfortunately, harming your child, let’s say, you’d have to find the factory in that overseas country and actually go after them directly. That was almost impossible, which meant that product liability, as an entire legal concept, essentially became useless. And so the result was that they created something called strict liability, which means no-fault liability. And this is complicated, but basically, let me summarise by saying that the point of these laws is to ensure, if possible, that the injury doesn’t occur in the first place; but if it does, then the person who is in the best position to make sure it doesn’t happen provides compensation to make the injured party whole, okay? So what that means is: if you get some random person that you cannot identify, who is on some island somewhere making false or fake comments, who is in the best position to ensure that doesn’t happen? Maybe it’s that individual, maybe it’s Google, maybe it’s Facebook – I don’t really know. But here’s the other side of the coin, and this is where it gets really, really challenging. Because when you talk about restricting speech, you are unintentionally also limiting political speech, religious speech, etcetera. And this is actually playing out right now. Because these large platforms, including YouTube, Facebook, etcetera, are really concerned about fake news, hate speech, and the things that are creating big losses for them, they’re now starting to pull people off their platforms – including, some say, a disproportionate number of very conservative political commentators. And so now you have people who sit on that end of the political spectrum saying, wait a minute, this is discrimination, right? How is it that our ideas are somehow harmful? And again, that’s easy for us to dismiss when we disagree with those ideas, but what if it’s your religion? What if it’s your political ideas that are being suppressed because you’re no longer able to get on that platform? These are very real concerns: on the one hand, we want to make sure that false statements are addressed, but at the same time, we have to understand that every time we limit speech, we’re limiting one of the most fundamental rights that people have. It’s very challenging. – Yeah, and I think that’s a great point, and it raises a connected question: people have a right to speech in most countries. – Right. – Or at least in a lot of the countries we deal with. But do they have a right to that platform to express their speech? – That’s a valid question, yeah. – It raises other questions about access: at a certain point, is access to certain digital platforms a right?
Right now we generally say no, but in this part of the debate it becomes an interesting question. I think if we go through some of the discussion points in response to what Xeilani first brought up, there are some really interesting ones. I think peter-nyc had another great comment about how we design some of these platforms– – Yeah. – Which we thought was really insightful. If you go back and look at some of the early historical narrative around the widespread social media apps that most of us are familiar with, in a lot of situations the founders and initial creators actually didn’t have a deep understanding of what these would become, right? And so, on one hand, how do you model for ‘we’re gonna have this kind of impact’? Because at a certain point, when you’re just a handful of friends starting something, to think ‘we’re gonna be able to influence three billion people in the world’, that’s pretty– – Yeah. – Arrogant, and in some ways crazy, right? – Yeah. – One or the other. So how do you model for that? That becomes a difficult question. But then once you start approaching it from ‘should you be doing something about that?’ – that again raises an interesting and insightful question about how you create the right structure for these platforms to police themselves. – Yeah, and one of the things that we’re gonna talk about later on is access to algorithms. As machine learning, AI and these other things become more prevalent – I mean, we’re using them every day, whether you realise it or not – we have to understand that these are literal black boxes where we can’t see the algorithm. I watched – do not do this, it’s a waste of time – a 23-minute video today from a prominent YouTuber about the YouTube algorithm and how difficult it is, as a YouTuber, to understand that algorithm and create viral content. What he was saying is that the reason YouTube is getting more and more clickbait-y, as he put it, and relying more and more on thumbnails – which is why you’re seeing a lot of maybe irreverent pictures, things that really cause people to click on them – is because of the way they created the algorithm. It really– – Rewards that. – It really rewards the click-through rate. CTR. And so he was saying that, as a creator, you can’t be as thoughtful about those things, because the algorithm is driving people in that direction. Now, we can’t see that algorithm, right? This is one of those things. And so the question going forward is: should we be able to? Should this be a public good? These are some of the questions that we’re going to address in the future. So as we go forward, again, we really appreciate these very thoughtful comments, because these are the broad questions. These are the new social goods; these are the new commodities. And as we think about how these companies should operate, we really need to think them through. Now, one last point – sorry, I know this is already running long. When we talk about the billionaires who own and operate these massive technology platforms, one thing to consider, which is totally new in this landscape, is that because of what are known as weighted voting rights, you now have people like Mark Zuckerberg who no longer own a majority of the shares in their company.
So he does not own a majority of Facebook, and yet he has almost complete control over what Facebook does, because every one of his shares has 10 voting rights, whereas if you own ordinary Facebook shares, you only get one voting right per share. – So a different class of shares. – A different class of shares. And as a result, you have people like Jeff Bezos, or the owners of Alibaba, who have an immense amount of power and control, not only over your data and privacy – which we’re gonna talk about going forward – but literally over the news content we read every day, right? The types of products we see and buy. There are so many things. – And ironically, I guess, the historical – at least the modern historical – genesis of weighted voting, or multi-class share structures, was to allow media companies a little bit more editorial independence– – Yeah, protection from shareholders. – Protection from being influenced, from being told ‘I don’t like you sharing this kind of truth’, and so allowing them to insulate themselves from that. And now it’s maybe a little– – It’s gone full circle. – A little backwards. That’s interesting. I think the other thing that we’ve talked about in the past, but which is incredibly relevant to what you’re talking about now, is the pervasiveness of some of these social media platforms and how they influence us. We both use them a lot ourselves, so we’re not saying that by default they’re evil, but more that we should just be aware. In my own world, working with some technology companies and start-ups, it’s incredible: if you talk to people who are on large social media platforms, or look at how they describe the platform design, it’s intentionally created so people stay on it, right? – Yeah. – The user interface is very much a combination of human behaviour psychology and graphic design and other things to make sure people stay on it, and for a lot of the people putting these platforms together now, ‘what can we do to somewhat manipulate people to stay on it’ is part of the design. This is part of the business model. – Yeah. – So it’s important for us as consumers to be aware of that. – Yeah. Okay, if you get us talking, we’ll talk forever. So we’re gonna end here, but we hope to see you in the blockchain modules, where we can talk about how some of these new emerging technologies are gonna impact our lives – for good, and maybe a little for the negative – and how you, as an innovator, as someone in the finance industry, or just as an interested party, can utilise these technologies in your own life and career.
Module 2 Blockchain and Its Governance
Module 2 Introduction
Hi and welcome back to Module 2! Thanks for sticking with us. We promise it’s only going to get more interesting! In this module we are going to talk about blockchain, which is really one of the key catalysts for the rise of FinTech. Now a few upfront caveats: the focus of this module is NOT which cryptocurrency you should invest in. If you have followed the cryptocurrency markets, they have been particularly volatile, so as the cryptocurrency enthusiasts like to say, “Hodl”: hold on for dear life. Frankly, we don’t know which cryptocurrency you should invest your life savings in, so please don’t ask, and if we did know, honestly, we probably wouldn’t be doing this course, we’d be at a warm beach. Another caveat is that this module will also not discuss initial coin offerings (ICOs), IICOs, STOs, or any of the variants by which someone might try to fundraise or monetize a blockchain project. Don’t get us wrong, these mechanisms are all interesting, but there is so much information to cover that it could easily be its own course, and because laws and regulations differ across jurisdictions and are constantly changing, it’s difficult to explain in a snapshot format. Maybe most importantly, though we are both lawyers, we’re not your lawyers, so if this is something you are thinking about doing as part of a blockchain project, please speak with your lawyer. Now given what we just said, the focus of our module is more about questions that might be good to consider as blockchain technologies become more pervasive. Really, what are blockchain’s implications: both the wonderful disruptive possibilities that it represents, as well as the potential issues we should consider before completely embracing it. You’ll find we won’t focus too much on blockchain’s technical details in this module, basically for two reasons: 1) because blockchain and its applications continue to grow so rapidly, things will likely have advanced a bit between the time we prepared this module and the time you end up watching it; and, more importantly, 2) the next course in the FinTech Certificate, which our FinTech Ethics course is also a part of, is entirely focused on blockchain and is taught by a wonderful colleague of ours from the Faculty of Engineering at the University of Hong Kong, who is a real technical expert in the space. So if you find yourself with an increased interest in blockchain, please be sure to register for the next course, “Blockchain and Fintech”. So one quick question for you, as we keep using these terms interchangeably: what is the difference between blockchain and cryptocurrency? That’s a great question, David, and I think a lot of people sometimes use those interchangeably. Effectively, cryptocurrency is one of the outputs of a blockchain. So, as the computers that are part of a blockchain network mine (we’ll talk about some of this vocabulary in a little bit) and solve problems to add additional blocks to the blockchain, coins are produced, in part to incentivize those miners to do the work. So keep that in mind as we go through. A blockchain, which is a distributed ledger network, can be used for all kinds of things, including tracking certain types of goods or services, or even people, whereas cryptocurrencies in their various forms are specifically new forms of payment.

2.1.1 What Is Blockchain Technology?
In its most basic form, a blockchain is a distributed ledger: essentially a series of digital records, referred to as blocks, which are connected together forming a chain of records, hence “blockchain”. Instead of this data being kept in a single place, though, the information is replicated and distributed across a peer-to-peer network of computers. The network collaborates to confirm whether new blocks of data can be added to the chain, which makes it difficult for a single member of the network to add incorrect information. Additionally, this decentralised nature also makes the blockchain difficult to modify, thus preventing tampering. Though a number of folks had researched and thought about blockchain and many of the cryptographic technologies that underpin it before, the concept of the blockchain and its offshoot, cryptocurrencies, really entered the public domain after a white paper was published in 2008 by Satoshi Nakamoto, titled “Bitcoin: A Peer-to-Peer Electronic Cash System”. The paper described bringing together various technologies and cryptographic methods to form the Bitcoin protocol, and it has gone on to serve as a framework for many of the subsequent blockchain-related advances in the FinTech space. So who is Satoshi Nakamoto? Though there has been a lot of speculation, Satoshi Nakamoto is a pseudonym, and the general public really does not know, at least not yet, the identity of this person, or whether Mr. Nakamoto is a single person or perhaps even a group of people. And even if we never figure out who Satoshi Nakamoto is, there is a real possibility that history will look back on the 2008 white paper as a seminal moment that fundamentally changed the course of history, or at least financial history.
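To make the “chain of records” idea concrete, here is a minimal sketch in Python. It is a toy, not how Bitcoin actually stores data, but it shows why a hash-linked ledger is tamper-evident: changing any block breaks every link after it.

```python
# A toy hash-linked chain (illustration only, not real blockchain internals).
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    """Tampering with any block breaks every later link."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
add_block(chain, "Carol pays Dan 1")
print(is_valid(chain))                   # True

chain[1]["data"] = "Bob pays Carol 200"  # attempt to rewrite history
print(is_valid(chain))                   # False: block 2's link no longer matches
```

In a real network, every node holds a copy of the chain and checks exactly this kind of linkage, which is why a lone bad actor rewriting history is quickly detected.

2.1.2 How Is Blockchain Governed?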
When considering FinTech governance, especially for blockchain technologies, is the lack of regulation a pro or a con? Blockchains are effectively regulated like industry groups, or even members-only clubs, and the mechanism for governance is generally based on the principle of majority rule. But is majority rule always right? Now this takes us straight back to ancient Greece, right? But the reality is that most modern democracies are not actually direct democracies where the simple majority always wins and governs. And this is why we think that Bitcoin and blockchain are simultaneously so appealing and yet so threatening: the one-person, one-vote idea is basically built into the code. So whoever controls the majority also gets to rewrite the rules. And identities are typically quite anonymous, so it’s difficult to identify who the other actors are. These principles raise a whole host of interesting issues. Because as you think about particular blockchain protocols, be it Bitcoin, Ethereum or other widespread protocols that are gaining more and more use cases, we could easily imagine a situation where a particular protocol application becomes so widespread, and affects so many other people, that we have to ask: do we want it to be governed by the members who hold the coins and can vote, or should it be regulated at a more national or even international level? Which process would you trust more? Now we’re not advocating that blockchain should be governed at a more national or international level, or have greater regulatory scrutiny per se, but it raises the question: as these technologies become more pervasive, is the current governance structure the way we want to deal with them? Especially if they are going to impact so many other people who are not necessarily part of the “member system”. If you consider voting from a corporate governance perspective, the concept of majority voting, otherwise characterized as one share, one vote, has long been the general rule. But while things definitely started that way, the reality is that a whole host of diverse voting mechanisms have been adopted to ensure proper governance. For example, supermajority voting has been legally built into many aspects of the corporate world. An example of this would be a special resolution to change the name or nature of a company, which would require a supermajority of the shareholder votes. Beyond that, a basic democratic majority or supermajority voting rule is not always the most efficient way to decide something. Now we have things like cumulative voting and other methods where a minority shareholder or voter can have a stronger influence or voice on a particular matter. So if we apply this back to blockchain and cryptocurrencies at their genesis, we need to consider the best way to manage them. Should there be a more comprehensive type of voting or control structure? Or do we really want simple majority rule, and just give power to the people? These are the types of questions that are going to take some time to answer. We talked about governance and how some of these protocols are governed by users, and fundamentally we have to remember that blockchain seeks consensus first, and not necessarily fairness or efficiency. That could be right or wrong; it’s something we’ll have to consider in the future. But will blockchain and its uses create greater inequality in the long run?
And if we jump ahead, will people that are already left behind be further left behind? One of the novel uses of blockchain is coupling it with something called a smart contract, which is not really smart and may not always actually even be a contract. So now that you’re probably confused, let’s talk about it.

Additional Readings

2.2.1 What Is a Smart Contract?
The term “smart contract” sounds really exciting and futuristic, right? But hold your excitement, because the current form of smart contracts is neither smart nor even a contract. Computer scientist Nick Szabo, an influential figure in the blockchain and cryptocurrency world, is credited with coining the phrase “smart contract” as early as the mid-1990s. A smart contract is simply a computer protocol, really some lines of code, that automatically executes a specified action, like releasing a payment, when certain conditions are fulfilled. So this code might represent an aspect of a contract, but the code itself is not actually a contract. Additionally, it’s not smart, because a person still needs to think of the terms that will be represented by the code. So someone like a lawyer is still needed to think through and negotiate the terms to be coded. So if these smart contracts are actually neither smart nor contracts, why are they so special? To answer that question, imagine you are cleaning out your room and find a tennis racquet you never used and now want to sell. You go online and are able to find a buyer, say David, who lives nearby. You set up a meeting and show David the tennis racquet. David confirms his interest, then gives you the money, and you hand over the tennis racquet. In this example, there is minimal risk that David will be able to run off with the racquet without paying you. But let’s imagine the same situation, except you live far away from each other, so you aren’t able to meet. Do you feel comfortable sending the racquet through the mail and trusting David to pay you? Now this type of risk is usually less of an issue when dealing with large companies, like when you order a t-shirt from your favorite brand’s online store, or with people you have repeat transactions with, but for one-off situations or large, complicated transactions, like a home purchase, there can be some uncertainty about payment, delivery, quality of product, etc. In such a situation, what if you could find a third party, say Jon, to take the payment from David before you send the racquet, with the payment released to you when the racquet is received? Would you feel more comfortable? This is exactly how smart contracts work: using “if something happens, then…” or “when something happens, then…” logic to solve this problem. So in our example, if a specified contractual term, say racquet delivery, has been fulfilled, then the protocol executes the release of payment, thus solving the problem. So how does this relate to blockchain? With blockchain technology, these smart contracts can be stored or embedded on a blockchain, so instead of being visible only to the counterparties that have a copy of the contract, as in a traditional contracting situation, a smart contract is widely available for inspection on the blockchain. In the example of selling your racquet, not only do you, David and Jon know about the contract, it is also visible to the bank that processes David’s payment, the courier who delivers the package, and every other actor that is involved in the transaction or has access to the blockchain in general. The distributed nature of the blockchain makes it difficult for a bad actor to not pay, delay payment, manipulate terms, or otherwise deviate from the terms of the original agreement, because the terms are recorded across the network and cannot be changed. And once they are fulfilled, payment is self-executing and happens automatically.
Which means, when the blockchain tracks that the racquet is received, the money will be sent to your account automatically. So what are the benefits of a smart contract? Well, some things that may come to mind are: one, these things don’t require human interpretation, hence taking out some human error. That’s because they’re self-executing. So there are no issues with a human doing something incorrectly as part of processing a contract, and it removes some of the temptation someone might feel of “well, if I keep my end of the deal, then I end up being worse off.” So it removes this human temptation issue. Additionally, once a smart contract is coded in, generally it can’t be changed, so it’s immutable. Now because of those factors, this ultimately should save time and money, thus making things more efficient and reducing transactional friction. Additionally, if we tie this back to the tennis racquet example, it removes the need for a third party. You see, for lots of transactions historically, a third party has been necessary to hold payment or collateral due to risk related to a lack of trust, which is something we’ve talked about. Perhaps the most common form of this type of third party is known as an escrow agent. Now imagine that instead of buying a tennis racquet, a US company is trying to purchase a big building in another country, say, China. They do not know each other, and they cannot meet somewhere with a pile of cash to make the payment and sign the deed at the same time. So the two contracting parties may enter into this staring contest of “who’s going to pay first?” or “who’s going to act first?”. In this situation, an escrow agent would serve as the third party, or middle party: on one hand holding the payment from the US company, and on the other hand holding the signed deed or legal agreement from the building owner. And once the two parties agree to pay and finalize the terms of the transaction, the agent will transfer the money and the deed simultaneously, ensuring the building owner will get their money, and the building purchaser will receive the legal title and the relevant documents so they can own the building. As you can see, smart contracts would serve the purpose of cutting out the middle party, be it Jon in the tennis racquet example or the escrow agent in a large international real estate transaction. And as we previously discussed, a lot of time and money can be saved by cutting out the middlemen. But does that mean smart contracts are great solutions for all contracting relationships or situations? The answer to that is “no”, and we’ll discuss why in the next video. But before that, we’d like you to think about a question: what are the situations in which a smart contract might make your life easier?
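Before moving on, here is a minimal sketch of that “if delivered, then pay” escrow logic in Python. Real smart contracts run as code on a blockchain (for example, written in a language like Solidity), so the class and method names below are purely illustrative assumptions, not any platform’s actual API.

```python
# A toy, Python-flavoured sketch of conditional-release escrow logic.
from dataclasses import dataclass, field

@dataclass
class EscrowContract:
    seller: str
    buyer: str
    price: int
    deposited: int = 0
    delivered: bool = False
    paid_out: bool = False
    log: list = field(default_factory=list)

    def deposit(self, amount: int) -> None:
        """Buyer locks funds in the contract up front."""
        self.deposited += amount
        self.log.append(f"{self.buyer} deposited {amount}")
        self._settle()

    def confirm_delivery(self) -> None:
        """In practice this signal would come from an outside source, e.g. the courier."""
        self.delivered = True
        self.log.append("delivery confirmed")
        self._settle()

    def _settle(self) -> None:
        # The "if this, then this" core: once both conditions hold,
        # payment releases automatically -- no Jon, no escrow agent.
        if self.delivered and self.deposited >= self.price and not self.paid_out:
            self.paid_out = True
            self.log.append(f"released {self.price} to {self.seller}")

contract = EscrowContract(seller="you", buyer="David", price=50)
contract.deposit(50)
contract.confirm_delivery()
print(contract.log)
# ['David deposited 50', 'delivery confirmed', 'released 50 to you']
```

Notice that nothing in `_settle` asks a human for permission; once the coded conditions are met, the transfer simply happens, which is exactly the self-executing property discussed above.

Additional Readings

2.2.2 Applications of Smart Contracts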
So, as we discussed, smart contracts may not be the solution for every legal problem. Definitely. So why is that? Because I think people think: it’s “smart”, it should just evolve and it’ll be okay, but that’s probably not the case. So why is that? Well, like you mentioned, the concept of smart contracts has been around since the 90s, and yet the vast majority of people don’t know what it means or have never actually used one, because in reality smart contracting is really hard. Basically, smart contracts are typically a binary solution, “if this, then this”; it is really much like computer programming. And it would be a legal situation where, if I tick off all these boxes, then you are automatically going to remit the funds or transfer the deed or whatever the outcome of that contract is. But if it is not a situation where you can just tick off those boxes and have “if this, then this” type solutions, and most legal situations are not like that, as we know, then a smart contract is very, very difficult to use. I think maybe as AI becomes better and machine learning gets better, it will be able to get to the periphery and deal with those grey areas a little bit better, but until then, smart contracts are going to be relegated to very simple, very rote, “if this, then this” type contracts. Interesting. So, I think there are two things that are really interesting about that. One is the idea that a smart contract is kind of an oxymoron, in that it actually is not that smart, to be frank. Like an honest lawyer. Just kidding. But secondly, I think the point is that the applications of smart contracts will probably be limited to the routine and mundane. Potentially. Well, not to say not important, but just to say: if you and I are buying, let’s say I’m buying a building from you, and you are in Seoul and I am here in Hong Kong, there are a lot of variables in that. Right, so I have to do my due diligence: look at past history, understand potential legislation, look at the foundation, the utilities, the mortgages. All these other things. So typically, when you enter into a contract that’s complex like that, it will have conditions precedent and all these things. So really quick, Dave, we understand what that means, but what is a condition precedent? It means a condition that precedes the closing. So if we enter into a contract, we sign it, but I’m not going to give you the money yet, and you are not giving me the deed yet. Instead, we have to go down a list and confirm every single thing has been done. Right, so I’ll usually get a few months. I’ll look: okay, is the foundation solid? Yes. Do my engineers like it? Yes. Research litigation: is there any litigation history? No. So then, after you tick off all those boxes, I finally agree to hand over the funds and you transfer the deed to me. It sounds simple, but it is obviously very complicated, because life is complicated. I think it’s interesting that you talk about the idea of the complexity of life. Because I think what we are really talking about is: any time there is some major qualitative assessment that’s necessary, it’s going to be very difficult for a smart contract to really be applied to that.
It’s where those variables are really minimal or non-existent, or it’s very vanilla, about “okay, this is what needs to be done, this is what you need to do”, and those responsibilities are very clearly defined, that we can rely on smart contracts. And here’s the interesting thing that a lot of people don’t think about when they think of contracting. You legally have the right to breach. Right, so when you enter a contract, there are ethics issues in there, and obviously you want people to fulfill the agreement, but you always have the right to back away. Now, there are legal ramifications for that. If you stop paying your mortgage, they can take your house. You could pay a fine. Yeah exactly, pay a fine, whatever, but the point is, if there is some underlying condition where I need to stop paying my mortgage, I have the right to do that. Within a smart contract, you don’t have that option, generally speaking, because again, upon the conditions being fulfilled, it is self-executing; it executes automatically. Right, so when they say smart, what they mean is that it does not require human intervention to execute and fulfill the terms of that agreement. But it’s like a roller coaster. Once you are going down the hill, there’s no pulling back; you’re kinda stuck with that ride. So there’s a level of commitment that’s required if you go down this route. Which is why I don’t think you are going to see, any time soon, any type of complex transaction where people are using smart contracts. Everybody wants to be able to get to the end of the line, in that roller coaster analogy; they want, at the very last moment, to be able to say: you know what, I don’t want to get on this ride. Even if that means they have to pay a fine. Even if they have to pay a breach fee or something. I need to get off this ride. And I think a lot of companies and a lot of transactions need that. Yeah, I think you’re right. And for complex types of transactions, you’re right, I don’t think the use of smart contracts will proliferate, in the near term at least. But I do think there’s a wide variety of daily contracting that we normally do that could really be suited to this. I mean, probably the most complex version would be a home purchase, to be honest. If you got the right documentation done up front, then you could potentially find a very efficient smart contract to deal with escrow and things like this. And I think it’s an interesting area that a lot of people, both lawyers and technologists, are continuing to explore: a really important part of this FinTech ecosystem that people are trying to create. Yeah, IoT, right? The Internet of Things. With wearables and things. I can see, for example, a health insurance policy with a smart contract tied to it, where if you exercise a certain number of days, or if you use certain things, then your premium comes down. Yeah, exactly. If you drive… So they are already doing this with cars, right? They’ll put a device on your car to measure your speed and everything, and as long as you’re a safe driver, your insurance premium comes down. A lot of those aren’t officially smart contracts yet, but you can see the method. Totally. You get the big data analytics, you get the AI and machine learning on the backend of that. It makes it very easy for that to be executable. Like a thousand little contracts. Basically.
So the lesson I take from that, before we move on, is: if and when that happens, and wearables are reporting to my health insurance provider, and that impacts my health insurance premium, I will purchase a dog, put the wearable on the dog, and let the dog run around. People are actually doing that already! So there you go. Just kidding. That was a joke! This is the ethics side of it. That was a joke. We also have humour in our modules as well. Thanks.

Additional Readings

2.2.3 Implications of Blockchain Technology
Well, blockchain sounds awesome, right? So what’s the problem? Well, really no problem per se, but let’s consider some questions. Okay, so first, from a business perspective, blockchain is just another type of technology; it’s not a panacea for all business problems. So it’s important that you have the type of business problem that lends itself to a blockchain solution. Moving beyond that, though, there are other implications to consider. Blockchain has an impact on the environment, for example. Remember when we mentioned that blockchain is a distributed network? Each node on the network is a computer that requires electricity. Each of those computers is engaged in “mining”: effectively solving complicated mathematical problems to add blocks to the chain. These mining rigs require lots of electricity, both to run the computers and to cool them so they don’t overheat. I have students who have a spare laptop or computer in their dorm room, and they have downloaded mining software and use the electricity in their dorm 24 hours a day to mine, albeit very inefficiently, and they think the electricity is free, but of course it comes at a cost. So, for someone who’s a layman, someone who’s not a technologist, you keep using the term mining; what does that mean? Basically, computers have to calculate a series of very complicated mathematical problems in order to be approved to add an additional block of information to this “blockchain”, and this is a level of security and access, a barrier to access, to prevent people from just adding things ad hoc onto the blockchain. Now, the ramification of this, however, is that it takes an incredible amount of computing power, and it will continue to take more and more computing power. And this is not just for Bitcoin, which is maybe the oldest or most well-known of the different types of cryptocurrency and blockchains out there, but for all the other different types of blockchain projects that have sprouted up; each of them requires some level of what they call mining. So now we have these huge mining rigs or farms out in random places in the world that consist of nothing but banks of computers calculating these series of mathematical problems in order to add more and more blocks to whatever blockchain they are working on. Okay, so back to the environment and the implications for it. You may be surprised to learn that, according to a 2017 estimate, Bitcoin mining consumed more electricity than each of 159 individual countries. They say 30 terawatt-hours, whatever that means. It means a lot of electricity. Right. And that’s only for Bitcoin; you can see that the electricity consumption would be much, much higher if it also included the mining of other blockchains. I realize this is a FinTech ethics course, so why should we be talking about the environment? So, from my standpoint, this is a really interesting and super, super important point that many of you maybe don’t really think about. When we talk about the implications of these technologies, in this course or if you’re just reading about them, we are typically talking about the person-to-person, transactional kinds of costs that they may have. So, loss of privacy or access to finance, and those are super, super important. But what we also have to think about, and what we hope that you think about, is the broader social and physical, even geographical, implications of these things.
When we introduce these technologies, again getting back to this concept of cultural lag, the technology has far outpaced our understanding of how to really deal with it in our real lives, in terms of its implications for the natural environment. So there are good and bad examples of this. In some places, in Canada for example, I’ve read they are taking old abandoned sawmills from a lumber industry that has been shut down, and they are retrofitting those large facilities into mining farms, which for some people is good: it means more jobs, maybe brings income in. But there are a lot of negative ramifications as well. Noise pollution is very serious, so a lot of people in those communities are complaining about the noise. There’s obviously the electricity consumption. The vast majority of mining farms are in China, and the vast majority of electricity in China comes from burning coal. And so there are very serious ramifications, both now and in the future, for things like that. But on the flipside, as culture catches up to the technology, we are also starting to think more creatively about where to put these large installations. So for example, again in Canada, I’ve heard, and I haven’t really seen this in use, but I’ve heard that they are actually trying to use the heat that is generated from these mining computers to heat industrial complexes or maybe even other types of buildings and homes. Okay, so there are some mixed uses or mixed purposes for the locations of these farms. Yeah. From a technology or FinTech ethics standpoint, I’m curious about your perspective: should a technologist, or an inventor, or a bank have to, or even want to, think about the environmental implications, or should they just be focused on the technology and the business model that they have? So that’s a great question and, I think, a really fundamental question that we shouldn’t just be considering in our course, but in a lot of different domains, to be frank. I think science is telling us that we are at the precipice of some really fundamental changes that are happening to the world, well, have been happening to the world, and I think if we try to silo ourselves off and say “what I’m doing is not directly related to that”, we can collectively find ourselves in a place that we didn’t intend to be. So I think, irrespective of industry, the impact that industry is having on the environment is important to consider. Yeah, there are things that we are gonna talk about in later modules. You’ll hear us talk about some positive uses of this technology. So again, it’s not just about currency. Blockchain can be used to track diamonds, to make sure that they are not conflict diamonds, or blood diamonds as they are sometimes called. To track people: either people that don’t have a government ID, like refugees, or migrant workers who potentially are at risk of human trafficking and slavery. So there are a lot of very positive uses of this technology, even uses that have nothing to do with currency whatsoever.
But I think one of the things, as David was just mentioning, is that we want you to constantly think about: what is the balance between introducing these new technologies, with their positive ramifications for change, the disruption, as Silicon Valley would say, to these markets and to the financial industry, and the unintended and possibly negative consequences that can come from them? Not only now, but in the future. Because if we are not thinking about those things, then by the time we get to that point and we see them right in front of us, it might be too late.
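To give a feel for why mining is so power-hungry, here is a toy proof-of-work loop in Python. It is a deliberately simplified sketch; real Bitcoin mining hashes block headers at vastly higher difficulty, but the shape of the work is the same: keep trying nonces until the hash meets a target.

```python
# A toy proof-of-work loop (illustration only, not real mining software).
import hashlib
import time

def mine(data: str, difficulty: int):
    """Find a nonce so the SHA-256 hash starts with `difficulty` zero hex digits."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

for difficulty in range(1, 6):
    start = time.perf_counter()
    nonce, digest = mine("block data", difficulty)
    elapsed = time.perf_counter() - start
    print(f"difficulty {difficulty}: {nonce:,} attempts in {elapsed:.2f}s")
```

Each extra leading zero digit multiplies the expected number of attempts by 16, which is the basic reason rising difficulty translates so directly into electricity consumption.

Additional Readings

2.3.1 Applications of Blockchain Technology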
So, now that we have a better understanding of what blockchain is and some general idea of its possible uses, it’s probably becoming clearer why people are both excited and concerned about the technology. From a trust and accountability standpoint, the anonymous nature of blockchain means that user data and privacy are better protected, at least within the system. But in an ironic twist, blockchain-based markets are also where stolen customer personal data is bought and sold, because law enforcement often has trouble identifying the parties involved. And in a more commercial context, some are concerned that the unaccountable structure of blockchain-based products, like ICOs for example, leaves investors, and in some cases even the public at large, vulnerable. I think we are only just beginning to understand the incredibly beneficial aspects of blockchain technology. But from a cultural lag perspective, we also realize that we probably don’t yet understand the full extent of the challenges that will arise from its use. So let’s look further at a few examples of how blockchain can be used for both good and bad. First we will discuss a really fascinating case about the dark web marketplace Silk Road, which used blockchain, and in particular Bitcoin, to create one of the largest marketplaces for illegal goods the world has ever seen. This Silk Road marketplace was like eBay or Amazon, but for illegal drugs and weapons. How could such a marketplace exist, you might be asking? Well, it was hidden on the dark web. So before we get into the case, let’s first take a moment to discuss what the dark web is.

2.3.2 Dark Web and Tor
Think of the internet as an iceberg in the ocean. The part that is visible to you and me is the “surface web”, which consists of the indexed pages on the internet, such as Google, or things you might find on Amazon and Facebook. Then there’s the deep web. The deep web is a subset of the internet consisting of pages that can’t be indexed by search engines like Google or Bing. Pages that require membership fall under this category, like online banking, your company intranet, and the very page on which you are watching this web lecture. Then there’s the dark web, also called the “dark net”. This is a further subset of the deep web. None of its content can be accessed via a normal internet browser; instead, you need special cryptographic software, such as The Onion Router, also known as Tor. Tor is free software, initially created by the US Department of Defense and the US Navy in the 1990s for the purpose of secure communications. The name itself is an analogy to an onion with layers upon layers, as it offers anonymous access to online resources by passing user requests through multiple layers of encrypted connections. You can therefore think of the software essentially as a digital invisibility cloak, hiding users and the sites that they visit. And it is this anonymity of the dark web, coupled with blockchain’s relatively anonymous and decentralized nature, that laid the foundation for the infamous marketplace Silk Road, which we’ll introduce next.
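The layering metaphor maps quite directly onto code. Here is a toy sketch in Python using the third-party cryptography package (pip install cryptography); real Tor uses different primitives and key exchanges, and the three-relay setup below is an illustrative assumption, but the peel-one-layer-per-relay idea is the same.

```python
# A toy sketch of onion routing's layered encryption (not real Tor internals).
from cryptography.fernet import Fernet

# Three relays, each with its own key (entry -> middle -> exit).
relay_keys = [Fernet.generate_key() for _ in range(3)]

message = b"request: example.onion/page"

# The sender wraps the message for the exit relay first, then the middle,
# then the entry -- like the layers of an onion.
onion = message
for key in reversed(relay_keys):
    onion = Fernet(key).encrypt(onion)

# Each relay peels exactly one layer; only the exit relay sees the payload,
# and no single relay knows both the sender and the destination.
for i, key in enumerate(relay_keys):
    onion = Fernet(key).decrypt(onion)
    print(f"relay {i + 1} peeled a layer")

print(onion)  # b'request: example.onion/page'
```

Additional Readings

2.3.3 Case Study – Silk Road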
In February 2011, Ross Ulbricht, under the pseudonym Dread Pirate Roberts, created the website platform Silk Road, where people could buy almost anything anonymously and have it shipped to their home without any trail linking them to the transaction. Named after the historical trade route that connected Europe to East Asia, Silk Road was founded on Ulbricht’s desire to create a marketplace free from taxation and government. The clandestine online marketplace was largely made possible by the combination of the widespread adoption of Bitcoin and the invisibility of the dark web. Combining the anonymous interface of Tor with the traceless payments of the digital currency Bitcoin, the site allowed drug dealers and customers to find each other in the familiar realm of e-commerce. It functioned like an anonymous Amazon for criminal goods and services. Silk Road gradually developed to look similar to traditional web marketplaces, with user profiles, reviews and more. And what started out focusing on drugs soon included other products, such as firearms. And although the authorities were aware of the existence of Silk Road within a few months of its launch, it would prove challenging to take down the website and reveal the true identity of its founder, Dread Pirate Roberts. In June 2013 the site reached nearly 1 million registered accounts. Thousands of listings featured all kinds of drugs, prescription medication, weapons and more, turning its founder, the 28-year-old libertarian, into one of the world’s biggest drug kingpins. From its launch on February 6, 2011 until July 23, 2013, over 1 million transactions were completed on the site, totalling almost 10 million Bitcoins in revenue and about 600,000 Bitcoins in commission, involving roughly 150,000 buyers and 4,000 vendors. At Bitcoin exchange rates in September 2013, that was the equivalent of US$1.2 billion in revenue and US$80 million in commission. In early 2013, a New York-based FBI team, Cyber Squad 2, had started their investigation of Silk Road. They were trying to crack the encrypted Tor network that Ulbricht was hiding behind, and like other law enforcement agencies, they were having a hard time; even using undercover agents to try to get access to Ulbricht, they were all struggling to break the case open. Finally, through a warning note on Reddit, the cyber squad was able to find code that was leaking an IP address, pointing to a facility in Reykjavik, Iceland. This enabled them to create a replica of the entire Silk Road system, allowing them to see everything, including Dread Pirate Roberts’ every move. They read through his chat logs, followed the main Bitcoin server showing all vendor transactions, and even learned how he had allegedly ordered several assassinations of people who had tried to blackmail him. Eventually, an IRS investigator was able to connect Dread Pirate Roberts to Ulbricht through an old post on an open forum where Ulbricht had asked a question about the encryption tool Tor. Through that question, Ulbricht’s personal email was revealed, which showed his full name. What happened next was straight out of a movie. While Ulbricht was in a public library in San Francisco, agents from the US government distracted him by staging a fight. And when he turned away to look at them, other agents grabbed his laptop and were able to secure the information connecting Dread Pirate Roberts to his account. On the computer they secured a mountain of evidence.
A list of all the Silk Road servers, 144,000 bitcoins, which at the time were worth more than US$20 million, a spreadsheet showing Silk Road’s accounting, and diaries that detailed all of Ulbricht’s hopes, fears and aspirations. As a result of all this, Silk Road was shut down, and Ulbricht, the pioneer who opened the door for drug sales to flourish in cyberspace, was subsequently sentenced to double life in prison. In court, the judge made clear that what Ulbricht did was unprecedented, and that in breaking that ground as the pioneer, he had to pay the consequences: anyone who might consider doing something similar needed to understand clearly that there would be serious consequences. Since then, similar marketplaces have been launched all over the dark web. Some have outright stolen their users’ bitcoins, others have been successfully shut down by law enforcement, and still others operate in some corner of the dark web, although none has reached the sheer magnitude of Silk Road.

Additional Readings

2.3.4 Case Study – Silk Road: Subjectivity of Ethics
Now I love this case, because it’s straight out of a movie, right? This 28-year-old guy, who’s seemingly very, very normal. Yeah. Neighbours didn’t have any idea what was going on, and he was leading, in many ways, what was considered one of the largest marketplaces for illegal behaviour that the world had ever seen, with total commerce worth billions of dollars. So what do we learn from this? Well, that’s an interesting question. People get arrested all the time, right, for illegal behaviour, selling drugs, all these kinds of things very similar to what Mr. Ulbricht was charged with and convicted for, but what’s so special about this case as it relates to ethics in general, and FinTech ethics in particular? Hmm, well, a couple of things immediately come to mind. First, because the nature of these types of crimes related to FinTech has become increasingly cyber, the way that law enforcement has to police these crimes is also becoming increasingly cyber. So a lot of the tools that Mr. Ulbricht and other people within that marketplace utilised on the dark web, law enforcement actually used those same tools, right? So when they go undercover, for example, they’re not literally going undercover by changing their identity or the way that they look; they’re creating user names and profiles so that they can infiltrate those market spaces, which, again, could be done remotely from somewhere in Wisconsin, for example. Sure. While talking to him in San Francisco or wherever he was. And personally, in some of the law enforcement work that I’ve done, it’s the same thing, right? A lot of the investigative work you’ll do is now sitting in front of a computer trying to put together financial documents and transactions to identify where the various actors are. The second thing that immediately stands out, from an ethics standpoint, and I find this really, really fascinating, is that his stated mission was actually moral in nature, right? So Mr. Ulbricht was essentially trying to create a marketplace; he’s a libertarian, right? And believes that government and regulation are inherently evil in some ways. That is what he claims, and so he wanted to create a marketplace that was free of these types of restrictions. – Government interventions. Exactly, right? And the way he described it, the government should not have a monopoly on violence, for example, especially in terms of drug trafficking and whatnot. He believed, or at least he claimed to believe, that this type of online marketplace would actually be inherently more ethical and more moral than the violence that occurs every day with drug trafficking into, say, the United States, right? And I think this goes back to the subjectivity of ethics and why it’s so difficult to have a global or even consistent dialogue concerning what is actually ethical, and I think it’s gonna become increasingly hard given the transnational and global nature of these types of, not only crimes, but just commerce. Yeah, so those are really interesting questions for me.
I think, piggybacking a little bit on some of the things that you’ve said, I thought this was really fascinating, because there have been large-scale drug sellers in the past, so that actual aspect of the crime itself is not necessarily too unique historically, but he was able to rely on cryptocurrencies, particularly Bitcoin, to facilitate the transactions. – Yeah. – In a completely digital space, which created the safety that you’re talking about, and raises questions of anonymity, and privacy, and the use cases of certain aspects of this technology, which is worth considering in the context of our course. Yeah, it’s only a matter of time, I guess, before the next iteration of Narcos, or whatever it’s gonna be, the Bitcoin or Silk Road version of this, where they’re gonna have to explain how this entire network of illegal behaviour has gone crypto. Hmm.

Additional Readings

2.3.5 Case Study – Silk Road: Cultural Lag
– Okay, so getting back to the conversation about Silk Road: are there certain aspects of how these technologies are being utilised that can help us understand what they’re gonna look like going forward? And let me relate it back to one of the key principles we’ve been talking about: the idea that once these technologies are out, it’s very, very difficult to pull them back. And there’s often this slippery slope, this race-to-the-bottom dynamic. So, if you look at it from a corporate regulation standpoint, companies were created, and then later on, because people were seeking privacy, they would go out to these island nations, the Cayman Islands, the BVI, which would kind of outbid each other by trying to be more private and providing less information. And so, again, while that attracted a lot of legitimate business, it also increased the opportunity– – To abuse the system. – Yeah, legal forms of abuse that have created problems globally now. So, are we seeing this? Are there other examples that kind of predict what this is gonna look like in the next iteration? – So, I think we’ve already seen at least one iteration post Silk Road. One of the things we mentioned is that one of the drivers allowing Silk Road to grow to the size it did was the use of Bitcoin as the medium of transaction. And we believe Bitcoin and these kinds of cryptocurrencies provide some level of anonymity: though the blockchain itself, the ledger, is exposed, and people can see the transactions that are happening, the actual users themselves have some level of anonymity, as opposed to using your credit card and being immediately identifiable. And maybe the next well-known version of this is Monero, which is another cryptocurrency, an altcoin, an alternative cryptocurrency, that has grown quite rapidly over the last few years. And one of its key characteristics is that it’s even more anonymous than other cryptocurrencies. – That slippery slope. – Again, there’s a potential slippery slope there. And perhaps one example is North Korea, which reportedly is using, and potentially mining, Monero to circumvent the international financial system, because they’re subject to a variety of UN sanctions and restricted from accessing traditional financial markets at the moment. And one way they are perhaps getting around those restrictions is the use of these more secretive, less accessible forms of cryptocurrency such as Monero; there are a lot of reports that they’re using it. – Okay, so Bitcoin was utilised within Silk Road primarily because it was largely anonymous. But now we’re seeing people leaving Bitcoin to go to something like Monero because it’s even more anonymous. – Potentially more anonymous. – Potentially, and now we’re seeing governments getting in on the game. And these are governments that oftentimes are maybe– – Maybe less mainstream. – Yeah, less mainstream, oftentimes tied to, say, terrorist financing or other globally sensitive political topics. I find it somewhat ironic, first of all, that you would have the growth of this next iteration flowing out of the same principle, anonymity, but it does make sense, especially because when you have this race to the bottom or slippery slope, that’s the way it goes. It continues going down.
But I also think it’s interesting, when you look at the moral underpinnings of why the founders of cryptocurrencies, and Bitcoin in particular, created those currencies in the first place: very much like Ulbricht and Silk Road, it was in and of itself based on moral principles, the idea that you wanted to decentralise the marketplace. You wanted to democratise finance. And, in many ways, allow people to bypass governments and current forms of currency, right? And so it’s interesting that, very much like Silk Road, and this is not to say that all these uses are bad, certainly, but it is interesting how what was initially perceived as at least partially a moral conviction is now in some ways being, and I don’t wanna say misused, but utilised in ways that perhaps weren’t initially anticipated. – Sure, and that’s really interesting, because if you talk to visionaries who have a real strong view about the role of cryptocurrencies, it goes right to your point: many of them imagine a world where fiat currency is actually replaced by cryptocurrency. Because fiat currency is tied to governments and central banks, a mechanism they feel is increasingly archaic. – Inherently oppressive. – Could be, could be. And so, moving to a more transparent, more distributed system of cryptocurrencies is what they think the future will be. And you’re right, there’s a great deal of irony there, because now not only do you have governments trying to regulate it more, they’re also getting involved in the use and production of it as well, potentially. And there are these minor examples of governments who have come out and said, “Hey, we may want to try to issue our own kind of cryptocurrency.” And so there’s a great deal of irony there.

Additional Readings

Explainer: ‘Privacy Coin’ Monero Offers Near Total Anonymity. (2019). New York Times. Retrieved from: https://www.nytimes.com/reuters/2019/05/15/technology/15reuters-crypto-currencies-altcoins-explainer.html

Jardine, E. (2018). Privacy, Censorship, Data Breaches and Internet Freedom: The Drivers of Support and Opposition to Dark Web Technologies. New Media & Society, 20(8), 2824–2843.

Piazza, F. (2017). Bitcoin in the Dark Web: A Shadow over Banking Secrecy and a Call for Global Response. Southern California Interdisciplinary Law Journal, 26(3), 521–546.

2.3.6 Case Study – Silk Road: Trust and Accountability
For me, the first question is: we talk about this thing called the dark web, and it sounds evil. Is the dark web an evil thing? What do you think, Dave? Yes! No. I think it does show one of the key things that we’re gonna talk about as we get into the ethics of financial technology: the way we define these things, the way we describe them, even how we name them, will colour people’s perception of them. So our bias towards something can be projected not only in the code that we create for, let’s say, AI going forward, but also in the way we characterize these technologies. So clearly this term “dark web” was probably put forward by individuals who wanted it to be perceived as a primarily negative thing, perhaps from a policing or national security standpoint. But the reality is, as David made clear, these technologies were actually created by the US government for secure communications between various elements of the US military. And there are so many aspects of these technologies that are utilized every day in order to protect us and provide us with privacy. This is one of the major dichotomies that we have, not only in FinTech, but broadly in regulating privacy and information in general. And this is something that has been going on for quite a long time. Because when you talk about ethics, most people primarily focus on what is legal; they focus on the law. On the one side you have lawyers like us, who teach people about the hardline rules, the black and white rules, about what is acceptable and what is not. And those have been largely defined by society through the codes and laws that we have in place. On the flipside, you have the moral, more ambiguous, sometimes subjective aspect of ethics, which can be related to culture or history or even religion; so many aspects of culture feed into what is perceived to be acceptable in society. And governments: the pendulum of regulation swings back and forth, in terms of how much to regulate, and then how far to back off that regulation. So, to use an example, if you were to go to someone and say: do you utilize communication tools like WhatsApp, for example, and do you find those communication platforms valuable because they encrypt the communication? I think most people would say unequivocally, yes, of course. Right. If you were to say: I’ll provide you this software, but someone from the NSA or someone from the police is going to be listening to all your communication and documenting it, I think most people would have an adverse, visceral reaction to that. So we want that for ourselves, in terms of privacy, ownership of our own data, and control of what the world knows about us. But then on the flipside, there are very valid concerns in terms of safety and national security, and so you’ll see scenarios like the San Bernardino shooting, where big segments of the population, even though for themselves they would advocate privacy and security, were simultaneously asking Apple: hey, you gotta jump on this, you’ve gotta crack this phone so that we can ensure these types of attacks don’t continue occurring. And I feel like this is where we are right now in terms of this dichotomy, this paradox: privacy for ourselves versus the broader social good. So can a single country manage that debate? Absolutely not, and this is the issue.
Again, if you go back to regulation as an example: whether it’s trade, whether it’s financial regulations, even simple things like contracting, there are challenges when initiating these types of transactions and legal relationships on a broad cross-border basis. And especially with anything related to technology, especially if it’s on the internet, you’ve got servers hosted in multiple countries, and you’ve got pretty much everything going through the US at some point right now. And you have some countries, like the US, that have a very broad mandate in terms of the extraterritoriality of their law enforcement, where they will go into another country and actually nab people, very much like executives of Chinese companies who have been nabbed, not even on US soil, related to what the US government views as its right to enforce regulation. And then you have other governments that are completely hands-off and don’t even have regulation on these points. Now, another example would be here in Hong Kong: a very, very small place, but a finance centre and a FinTech centre. A lot of the data that is here is actually hosted outside of Hong Kong, and so when you set up a bank account, or you click on, you know, iTunes and agree to your data being collected, what you may not realize is that a lot of the time that data is actually stored elsewhere, so you have multiple privacy and data ordinances and regulations that are going to apply just to that one subset of data.

Additional Readings

2.3.7 Case Study – Silk Road: Privacy
So that’s a great explanation, and I think a great way for us to start thinking through the topic. I guess it fundamentally gets to a core question: do people that use these technologies, be it the dark web, the deep web or just normal everyday applications that everybody uses, have a fundamental right to privacy in their use? Or, by virtue of saying “hey, I want to use this application”, are we basically saying “I am giving up some level, some measure of privacy”, and is that what pushes people into using things below the surface internet, be it the deep web or the dark web? This question of the right to privacy is not a new one; it’s a centuries-old question that goes back to deeply held moral and legal beliefs. A lot of major legal questions around the world, including abortion, actually come back to this same question of the right to privacy: what right do I have to engage in an activity within my own home, as long as I’m not harming other people? And this is just an extension of that, where this data is being projected publicly, and it’s a really complicated issue. Because, on the one hand, when you say “the right”, first there has to be a granting of that right. There needs to be a legal principle, for example within a constitution or within the law, that says you have the right to this particular thing, in this case the ownership or control of your own data. Then you may even have a higher moral right, kind of an Aristotelian or even religious right to privacy: I’m an individual and therefore I have the right to control who I am, my own image, my own likeness, the way I’m projected to society. But then, beyond that, you have those daily ticky-tack opportunities that are contractual in nature, where we often give away these rights, and we agree to, not a violation of privacy, but certainly a limitation of our privacy and our own data. At least an erosion of our privacy. Yeah, exactly. A great example of this: just recently in one of my classes, I had a number of students sit down and read through the terms and conditions that they had to click to accept to use a particular, very well-known app on the phone. You can say it. Well, I don’t want to put them on the spot. It’s an app that utilizes photographs. They all use it. Yeah, exactly: almost all of them were using the application, yet none of them had ever read the terms and conditions, and as we went through it clause by clause there were many things that surprised them. Particularly about the ownership, or not necessarily the ownership, but the use of their data, and I think this will become a broader issue, particularly when it comes to financial data as well. Yeah, and we haven’t really gotten into AI and facial recognition software yet, but just imagine: we have potentially thousands and thousands of images of our faces, of our facial expressions, that are out there now, that we have provided, well, not to the public as such, but essentially to apps and other websites, and we are giving them the right to publish these things, often very publicly. And then you get into things like deepfakes, with video technology that can now take images from an app, say Instagram or Facebook, and actually alter them in a way that creates videos that are very lifelike, very realistic.
This is where I think, several years into the future, people are really going to question why they were so willing to put images of themselves on the Internet. There is one interesting side-note – again, not to bring this back to parenting, but I have talked to a lot of parents about the way they utilize, or allow their children to utilize, smartphones in their personal lives. Right. This is something I think we are all still kind of wrestling with, because we don't understand the implications of it. So one of the things that my wife and I decided to do is to never – or at least not for an extended period of time – put photos of our children online. And the primary reason goes back to this concept, this fundamental right to privacy: who has the right to decide to put your image publicly on the Internet? So, the example that I have given in the past is: imagine you're going to a job interview at 21 years old, your first interview, and your potential employer has access to 10,000 images of you from the time you were born to the time you graduated – and you never consented to that; you were never asked whether that was good or allowable, but it was put out there anyway. And again, this is not prescribing a moral solution to other people, but it is an example of how we as a society, now that the technology is out there, have to go back – from a cultural-lag perspective – and redefine how we are willing to engage with and utilize that technology.
Additional Readings
2.4.1 Case Study – Blockchain and Foreign Remittances
For our next case, let us tell you a little bit about Hong Kong, where both of us have lived for approximately 10 years. As many of you know, Hong Kong is dynamic, global, and one of the most interesting cities in the world. A part of Hong Kong's story that most casual observers are not aware of is that embedded within Hong Kong's cosmopolitan make-up are hundreds of thousands of women who provide childcare, home care, and other household duties for many of Hong Kong's families. These women are designated as Foreign Domestic Workers, but usually referred to as "helpers", "a-yi", or "Aunty". There are approximately 400,000 of these women working in Hong Kong, most hailing from the Philippines or Indonesia. They are generally paid around US$570 a month, or roughly US$7,000 a year, most of which is remitted back to their home countries to support their families. The reality is that most of these women work really hard for a salary that you and I may not consider that high, but that salary is almost double the GDP per capita of the Philippines. And in the aggregate, these remittances by overseas workers account, according to World Bank data, for approximately 10% of the Philippines' GDP. So individually and at a national level, the money really adds up, and the impact of these wages is a very big deal. Now how does this money in Hong Kong make its way to a family living in a village somewhere in the Philippines? Well, we are really fortunate that David Bishop, besides being one of our course instructors, is one of the world's foremost experts on issues related to domestic helpers and how to protect them from exploitation. So let's hear from him about the issues these women face when sending money back home. So, you might think that if you were going to send money, you would have a bank account and simply do a bank transfer. Simple, problem solved. Unfortunately, for the tens of millions of migrant workers around the world, this is usually not possible, since they are generally unbanked on both sides. Meaning: many of the foreign domestic workers in Hong Kong don't have a bank account here, and their families on the other side – the people they are sending money to – typically don't have a bank account either. So the workers receive their wages in cash, and they have to figure out how to get that cash from Hong Kong to their family in a remote village, perhaps somewhere in the Philippines. To fill such needs, money remittance companies have sprung up all over the world, the most famous probably being Western Union, and for decades this is how people transferred money. As part of this process, there are two important things to note, which might not be apparent. First, there is a physical component to remitting money. A worker has to physically go to one of these locations to hand over cash, and on the other side there is another physical location where the receiver has to go to pick up the money. So both sending and receiving are very time- and labour-intensive, due to standing in lines, walking long distances, and perhaps waiting for and using public transportation, which comparatively might not be cheap. In addition, many of these workers only have one day off a week, usually Sunday, so much of that day can be wasted trying to send money home. Second, there is an issue of financial literacy. These money remittance companies charge fees that you and I may consider excessive, sometimes as high as 8 or 9% per transfer.
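To put those fee levels in perspective – including the currency-conversion margin discussed next – here is a minimal back-of-the-envelope sketch in Python. All the numbers in it (the 7% fee, the 2% FX margin, the HKD-to-peso rate) are invented for illustration, not figures from the course:

```python
# Illustrative remittance cost model (all rates are hypothetical).
def remittance_cost(amount_hkd: float,
                    fee_rate: float = 0.07,         # assumed transfer fee (7%)
                    fx_margin: float = 0.02,        # assumed markup over mid-market FX
                    mid_market_rate: float = 7.0):  # assumed HKD -> PHP rate
    """Estimate what the recipient gets versus a friction-free transfer."""
    fee = amount_hkd * fee_rate
    effective_rate = mid_market_rate * (1 - fx_margin)  # provider's worse rate
    received_php = (amount_hkd - fee) * effective_rate
    best_case_php = amount_hkd * mid_market_rate        # no fee, mid-market rate
    lost_php = best_case_php - received_php
    return {"received_php": round(received_php, 2),
            "lost_php": round(lost_php, 2),
            "lost_pct": round(100 * lost_php / best_case_php, 1)}

print(remittance_cost(2000.0))
# -> {'received_php': 12759.6, 'lost_php': 1240.4, 'lost_pct': 8.9}
```

Under these assumed rates, nearly 9% of the transfer evaporates between the counter in Hong Kong and the village in the Philippines – the order of magnitude described above, repeated month after month.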
Additionally, currency-conversion fees are typically not competitive. So even if a remittance company has a low rate for sending money, it will likely make money on the currency conversion – say, when converting from HKD to Philippine pesos. On top of all that, remittances can sometimes take time, at least a few days if not longer. I'm not saying these companies shouldn't make money for providing a service, but frequently their customers are not really that informed, or have limited options. So a natural question is: what if that friction, lost time, and those unnecessary fees could be avoided, or at least reduced? For many, the answer to that question – or at least an important component of the answer – is the use of blockchain technology. Today there are a number of remittance services trying to employ some level of blockchain to minimize many of the frictions we have discussed, by promising to make remittances more efficient, secure, and/or affordable. As these innovators pressure incumbents, there will be a shift – first a trickle, then a wave – as users become comfortable adopting new technology, bridging any cultural lag and learning to trust new advances. Overall, the breakthrough in blockchain is really exciting and will be a game-changer for many of these workers, as well as for the millions of people around the world who transfer money daily. And in Module 6, we'll be looking at a really cool blockchain remittance company called BitSpark, formed right here in Hong Kong. So stay tuned for that.
Additional Readings
Module 2 Conclusion
We believe blockchain has the potential to be a revolutionary technology, like the Internet 20 years ago, and it's truly exciting to consider its possibilities. But as with many such technologies, there are implications of widespread use that are not initially apparent and that are difficult to address once the technology has become entrenched. In such situations, it's helpful to use frameworks to consider risks and implications. One framework that we've found meaningful is "The Blockchain Ethical Design Framework", written by Cara LaPointe and Lara Fishbane and published by Georgetown University's Beeck Center, which focuses on social impact and innovation. We've included a link to the report below. Their framework is focused on six guiding questions to ask when using blockchain as a solution: How is governance created and maintained? How is identity defined and established? How are inputs verified and transactions authenticated? How is access defined, granted, and executed? How is ownership of data defined, granted, and executed? And finally, how is security set up and ensured? These are important questions, and they can help each of us think deeply about the impact of blockchain technologies – whether as a user, or if we intend to deploy blockchain as a solution to a problem. More broadly, this course is ultimately about asking questions and having the desire and courage to do so. We will return to many of the themes that emerged as we explored blockchain, and we will continue to ask such questions as we consider other technologies and their applications in later modules. OK, look – we think blockchain is really, really exciting, really game-changing technology. And so, from our perspective, we want you to think about blockchain the way we think about the Internet of 20 years ago. In the mid-90s there was the advent of a new age of information, this thing called the Internet that everyone was excited about, and it gave us access to more knowledge, and made knowledge more available to people, than at any point in human history. Over the last few years, in the wake of fake news, echo chambers, and so on, as the cultural lag catches up, we have realized that this technology has also come at a cost. In the next module, we'll be looking at compliance, regulation, and rules. But the thing about laws is that they are typically retrospective – they look backwards. Therefore it is important for us, individually and especially as a society, to proactively look ahead and think about what we are willing to pay, what we are willing to give up, in order to have these technologies in our lives.
Additional Readings
Module 2 Roundup
Hey everybody. We're back with our roundup for week two. We really appreciate all the active participation this week in the discussion board. There were several really engaging discussions and questions in the forums, which is always great to see. – Yeah, honestly, one of our hopes as we designed the course was that it would not be unilateral – meaning us just giving content to you – but multilateral, in the sense that you're participating and engaging and pushing us as well, which really seems to be happening. Seeing this type of activity is really rewarding and gratifying for us, because we think this is how some of the best learning occurs. – And we hope to continue that in the next module. As all of you are aware, in module one we provided a broad framework by which to analyse some of the core technologies at the heart of FinTech, and in module two the first technology we really considered was blockchain. From the comments in the discussion forum, it seems many of you have been thinking about similar questions, so we wanted to spend the next few minutes following up on some great questions and contributions. The first one that comes to mind started with a comment RichardStampfle made about the idea of free markets, which generated a lot of back and forth between a number of course participants, including Jstout84 and Peter-NYC – a really great interaction, and exactly the type of multilateral discussion we hope to see generated in the forums. Dave, what are your thoughts on this broader topic of free markets, and how it operates in the context of some of the principles we're discussing in the course? – Okay, so we're gonna solve one of the great questions of capitalism and economics here in the next five minutes. – Five minutes. Sure, this is how we operate. – Sure, no problem. First, just as a side note: some of the comments are really, really well thought out, so we appreciate the high level of engagement. It's clear that you guys are pretty damn smart, and it's been making us think a lot. Free markets versus government regulation – this is the age-old question from an economic standpoint, and when talking about FinTech innovations, and especially blockchain, this is the intersection where people really start to have significant disagreement. Certainly from a government regulatory standpoint, that is the reason most governments are wary of this type of innovation, especially cryptocurrency: who is able to control it? One of the commenters – it was Jstout84 – I'm just gonna read this, because he had a really good quote from Thomas Hobbes: "People live nasty brutish lives. Always seeking to undermine each other." And he asked: is it too idealistic to think that completely free markets can function for the benefit of all? I think, again, this is the question for this course. People call it the fourth industrial revolution; from my standpoint it's really just capitalism 2.0 or 3.0.
Can we as a global society come together in such a way that these massive, currently disparate resources can be shared so as to achieve – maybe not complete equality, but a world where people don't feel so disengaged, so separate. – Marginalised. – Which can obviously lead towards violence and other types of challenges. I think that is the great question the original commenter was asking, and in module five we're going to take this question a little further and really look into it. A lot of FinTech innovations were started based on these broad ideas of decentralisation of power, or democratisation of finance – really about eroding power structures that have existed for centuries, sometimes millennia. And the question we have for you in module five is: will governments, will large institutional holders of power like banks, actually allow that to happen? I think, again, that is the underlying fundamental question. To answer the specific question you asked: I don't think it's gonna happen any time soon, but it is really exciting. The way I would flip it around is: if FinTech works the way some people think it will, will they have any power to stop it? Or is it just an eventuality, where cryptocurrency and various forms of decentralised systems make it so that the very concepts of government and finance become eroded and transform into something new? I think that's really interesting. Not anytime soon, but it's a very interesting hypothesis. – Maybe to piggyback on some of those points: ultimately it comes down to a tension between where we think regulation belongs versus this invisible-hand idea that Adam Smith talked about. The way we think about it, they exist on a spectrum, and that spectrum can shift depending on which industry, which country, and which microcosm of the economy we're talking about at a particular time. But as new technologies come into play – some of the things we'll talk about in module five, smart contracts, or other things we end up touching on – how do they help facilitate less friction in transactions? That, I think, is ultimately at the heart of it. One school of thought on why we bring regulation into financial transactions and the economy is that we want to remove transaction costs as much as possible, but at the same time maintain a level of fairness, and of protection for certain players in the game. So this is an interesting tension. Unfortunately, I don't think we answered it in five minutes, sorry. – No, we probably didn't. Again, you can probably answer these questions in the forum better than we can – it's clear you are really thoughtful on these types of questions. But it does bring up another thing that, within the forums and the questions you're asking, seems to come up over and over again.
There's this interesting dichotomy – or, as they say in Chinese, [speaking Chinese] – between two opposing forces that a lot of you have identified in this FinTech space. The idea is that, on the one hand, we don't trust traditional financial institutions, maybe even governments, so we hope that financial technology innovations push us towards a more democratised, decentralised system. And yet when we asked you who you trust, it was... – Traditional financial institutions. – Traditional financial institutions, right. And the underlying reason why is that there wasn't a track record and, more importantly, there wasn't a regulatory structure providing that kind of social safety net – the insurance, the framework that ensures that if you invest in some cryptocurrency or other system, your money will be safe. I think this is such an interesting dichotomy that we're running up against as a society, and it leads into, for example, one of the things we talked about this time: smart contracts. We had people talking about how smart contracts could, would, or maybe even already are influencing their lives. A lot of people touched on real estate transactions and, related to real estate, the storage of documents – deeds and other land records – in government systems. What stood out to you in terms of the issues, or the ways people saw smart contracts becoming more relevant in our lives? – In terms of the relevance of smart contracts? – Yeah. – So I think "smart contracts" is almost a buzzword; it sounds great. With legal structures, or law in general, we want to make sure we get the right structure in place before we implement, and this is where some of the potential issues could arise – because if we don't think about this comprehensively, it can potentially create more problems than it solves, or at least the two cancel each other out. That's probably where the real concern is for me in particular. – Can I give a quick example of that? – Yeah. – One of the things that came up regularly in the comments was real estate transactions, especially the buying and selling of homes, and it was really clear that a lot of us are tired of paying all those middleman fees. Tired of paying real estate agents. – Broker fees. – Yeah, broker fees. – Lawyer fees. – Lawyer fees. There are so many people in the middle grabbing pieces of that transaction. And because a home is the largest investment most people make, it also carries the largest transactional fees we tend to pay, and it's really easy to look back and say: I'm buying this house, I'm taking all the risk – why in the hell am I paying this fee to a real estate agent who just unlocked the door for me? It does seem very natural. Now, if you don't remember from my background, my original legal practice was in commercial real estate, so let me give you the flip side of that coin, because I think this goes exactly to your point. The reality is that a real estate agent – and "agent" is a legal term, right? That means a fiduciary relationship, which we talked about already.
This idea that it is someone who is put in a position of trust, and therefore held to a higher standard of trust and legal requirements, because they are meant to be there to help you and guide you. – Normally when we talk about an agent, we talk about principal and agent. – Exactly. – Somebody delegating trust, authority, power to the agent, right? – Yep, exactly. – In this case the property owner. – Yeah, the property owner, or the purchaser. You have to use an agent, legally, in many cases, in order to go through that process, and again it seems so unnecessary – why would we pay that person for doing so little? The one thing I want you to consider going forward is this: you're not necessarily paying the fee for what they are doing; you're paying the fee to ensure they do something if it goes wrong. And this is why we have that same dichotomy between the traditional financial system and a blockchain-based cryptocurrency or whatever FinTech system: you are paying the lawyers, you're paying the title insurance company, you're paying the brokers – you're paying all of those people along the way to protect you if something goes wrong. 99.999-whatever percent of the time, nothing goes wrong, and so it seems like a waste. – A wasted cost. – Yeah, exactly, a wasted transaction. You think, oh my gosh, they got $1,000 for doing nothing. I can tell you, as a former corporate lawyer in the commercial real estate space, it is absolutely money well spent most of the time, and I'll give you a quick personal example. My wife and I purchased a home at foreclosure before we moved to Hong Kong, about 12 years ago. We sold that home in 2017, and when we sold it, we realised – because the lawyers found it – that the legal description of the land had actually been recorded incorrectly. Because of that, they had to go and find the original owner and get them to sign a new deed – a corrected deed, it's called – with the actual... – Proper description. – Proper description on there. And if they hadn't done that, the buyer would not have been able to get all of their parties to line up – the mortgage company, the title insurance company, et cetera – and we would have been stuck with a house we could not sell, right? So on the one hand, it means the original lawyers didn't really do their jobs; on the other hand, it means that because we paid these fees, these people were there to protect us. And I think, again, this gets at the heart of it. I'm not saying they're worth all of their money. I'm not saying I don't also feel that sense of anger when I have to pay someone a fee I don't really think they deserve – and as a lawyer I've been on the other end of that, probably received money that maybe I didn't deserve in the traditional sense. But the reality is, the system is there specifically to deal with the dichotomy we're now facing. We want that protection, and we need it; technology could now give us extreme efficiency, but with that efficiency comes less certainty and less protection. So the question is: how do we develop the efficiency while also maintaining the protection? I think that's really hard. If it's self-executing, that's really hard.
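As an aside for the technically inclined, a toy example may make the "self-executing" worry concrete. Here is a minimal sketch in plain Python – not any real smart-contract platform, and every name in it is invented for illustration – of an escrow that releases payment automatically when a data feed reports a condition as met. Notice what it lacks: any branch for appeal or correction once the condition fires.

```python
# Toy "self-executing" escrow (illustrative only; not a real platform's API).
class Escrow:
    def __init__(self, buyer: str, seller: str, amount: float):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.released = False

    def on_oracle_report(self, deed_recorded: bool) -> None:
        # The contract acts on whatever the feed says. If the feed is wrong
        # (say, a faulty legal description), the transfer still executes:
        # there is no "call a lawyer" branch and no undo.
        if deed_recorded and not self.released:
            self.released = True
            print(f"Released {self.amount:,.0f} to {self.seller}")

escrow = Escrow("buyer", "seller", 1_000_000.0)
escrow.on_oracle_report(deed_recorded=True)  # fires once, irreversibly
```

The efficiency is real – no brokers, no waiting – but so is the loss of recourse just described: code executes on its inputs, right or wrong.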
– That's very difficult. And I think, at the heart of a lot of the technology we've talked about – be it blockchain, AI-powered mortgage lending decisions, or anything along those lines – that ability to have recourse is really central. When we talk about free markets, regulation, the role of the law – how does that play out? At the end of the day, if something does go wrong, you want to have some level of recourse. – Absolutely. – Ideally being able to talk to somebody. – Absolutely. – And the bigger the dollar value, the more you're gonna want that. – Right. I think a large part of the issue we face with new technologies is that, up to now, nobody has really articulated what that recourse would specifically be. If a particular block is incorrect, or the AI decision-making going on in a particular company produces, for whatever reason, a wrong decision, what is the recourse for the person impacted by that? – Yeah. – And that cost – financial, but also in time and emotion – is pretty steep for the average person. – Very significant, yeah. Going on to some of the other comments about blockchain and the ways it could potentially impact our lives, some things stood out to me that I hadn't really thought about, especially in terms of day-to-day life. Some commenters talked about traceability of products. Product liability is a very serious thing: you want to know that the food you're eating is safe, or that the diamond you're purchasing is described accurately, and there were some really interesting discussions about that. Here in Hong Kong, someone mentioned milk powder, which is sometimes difficult to trace. – Baby milk powder. – Baby milk powder. There were concerns several years ago about the source of milk powder and what it contained, so maybe there's a role for traceability there – and maybe in terms of fair trade, things of that nature. One thing that wasn't mentioned, which I thought would come up because it has come up globally in certain contexts, is voting – not voting within a cryptocurrency, which we did talk about, but voting in terms of governments. – Political elections. – Political elections, yeah. The immutability of the blockchain does mean that, theoretically, if you wanted to increase the number of voters, the best way to do that is not to make them physically go to a polling location but to let them vote on a mobile application somehow. That didn't come up, but let me throw it back at you: is that something you think governments would potentially allow at some point? – I think the other thing that's really interesting when it comes to blockchain and its applications – one or two commenters did talk about this, or at least alluded to it – is the idea of property records, or title. How do we track that? This is obviously super important for governments, as well as for homeowners like you were describing in your own personal experience.
I think in a lot of places in the world, particularly where database records are not as comprehensive or as clear as people would hope, one solution people hopefully point to is blockchain. We see this in countries where the title or deed record is really spotty: if you could get that record straight, you would clarify a lot of potential issues and unlock a lot of value for the people who own that property, letting them utilise it in different ways. The real impediment, or at least one of the key impediments, to actually doing that is getting the right records in to begin with. – It has to be proper in the first place; otherwise you're just going to have immutable bad data. – And then going back to try to fix that brings us back to what we were talking about. – I don't think many of you realise how inaccurate a lot of real estate records really are – or maybe you do, and that's why you're suggesting this. Take Hong Kong as an example. It's one of the most modern economies, and it's a common law jurisdiction whose legal system stems from the U.K. system, so if you studied or worked as a lawyer in a common law country, a lot of your legal training can be transferred over – the vast majority of it. But here's the thing: every single lawyer, even one who came up through the Commonwealth system, has to take the conveyancing course here. – What does conveyancing mean? – Conveyancing is the transfer of ownership of real estate to another person. It's so messed up here that everyone has to take the course, no matter how much experience they have. I, as a former commercial lawyer from a common law country with the same legal background, was told: if you're going to transfer your qualification over, you still have to take the conveyancing course, because it's so different here from everywhere else. And the rumour is – I don't know if this is true – that if you look at the deeds of any property in Hong Kong and go back far enough, you could find a conflict in terms of ownership. Again, I'm not saying that's true; the point is simply to show that... – Even in sophisticated markets. – Exactly – where they've been keeping records for a long time. – And if you think about markets that are, for whatever reason, less sophisticated, there will be a litany of greater issues to address. – It's almost like the less developed it is, the more likely it could work, because you almost have a clean slate. – This is a real difficulty in a lot of places in the world when it comes to real assets, particularly property, and how they are conveyed, sold, or used; getting clarity on this would actually help a lot of these countries and their economies. – Oh yeah, a tonne. So, going back: we mentioned voting from the political standpoint, but voting did also come up in this module, so we want to talk about that briefly. Some of the more interesting conversations – where, again, it was very clear that many of you understand this as well as or better than I do, especially on the technology side – were about blockchain governance: the voting mechanism by which a blockchain is controlled.
We mentioned that the majority of blockchains and cryptocurrencies are typically governed under a one-vote – a majority-rule system, excuse me – and we asked: is that the best way to do it? What were some of the ideas or thoughts that came up for you? – I think the analogies that usually come up when we talk about how a blockchain is governed – because there are all these different communities around different blocks, chains, and applications out there – start with corporate law and the principle of majority rule. Beyond your elementary school teacher using it to decide what to do next – are we gonna go to recess, are we gonna eat our snacks, I'll let you vote. – That wasn't my elementary school teacher. She was a dictator. – Beyond that, corporations as a general rule follow one share, one vote, which we mentioned in our course, and this is something that is deeply ingrained on the political side, in how a lot of nations govern, but also on the corporate side. Now, obviously there are exceptions to that, and some of the commenters had great points about what the alternatives would be. – Yeah. – So we think of supermajorities: voting on a special resolution usually requires a supermajority, which could be 75% – is that the criterion we want if we're going to fundamentally change things? Other people talked about cumulative voting, where people can load up their votes on a particular item, so to speak, and concentrate them, which would help smaller, minority-type voters. These are really interesting discussions to have when we think about how we want to govern these things, because we know that, at least in certain blockchain communities, there is some concentration of power – concentration of the tokens, or whatever is being used as your vote to determine how many votes you have. – This kind of came up, in case you're not going there: several people mentioned a "51% attack". What is that? – Yeah, so a 51% attack is a little bit different. It falls under the umbrella of governance, but it's different from pure voting. If we use Bitcoin as an example: there are a number of computers out there – large, oftentimes very specialised computers – doing mining, calculating a mathematical problem, and when that's solved, it basically pops out a coin for you, right? The idea of a 51% attack is that if somebody controls over 50% of the mining power on that particular chain – once they get to 51% – they could actually change the records in the blockchain ledger. Until you hit that threshold, you normally can't. So is that a risk? Certainly, in certain communities, that is going to be a risk – particularly ones where mining power is not well distributed. And there's the voting side of it as well: miners, for example, solve mining problems, and as a reward for solving the problem they receive a cryptocurrency of some sort. Sometimes certain communities will use that as the voting metric.
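A brief technical aside on that 51% threshold before the conversation returns to coin-based voting: the back-of-the-envelope model in the original Bitcoin whitepaper estimates the probability that an attacker controlling a fraction q of the total hash power ever catches up with the honest chain from z blocks behind. A minimal Python sketch of that calculation is below; the function follows the whitepaper's formula, while the example shares are just illustrations:

```python
import math

def attacker_success(q: float, z: int) -> float:
    """P(attacker with hash-power share q ever overtakes the honest chain
    from z blocks behind) -- the estimate in the Bitcoin whitepaper."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # at >= 50% of hash power, catching up is a certainty
    lam = z * (q / p)
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

for q in (0.10, 0.30, 0.45, 0.51):
    print(f"hash share {q:.0%}: P(rewrite 6-deep history) = {attacker_success(q, 6):.4f}")
```

With a ten-percent share, rewriting history six confirmations deep is a small fraction of a percent; at a majority share it is a certainty – which is exactly why concentration of mining power is a governance question, not just an engineering one.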
If you've popped out 100 coins, you might have 100 votes to decide how things are going to work out – whether we go this way or that way on a particular problem. I think that's where those two things are linked. But in terms of voting, if I've got X amount of coins, should I be able to vote one vote per coin I hold, or should there be some other type of system in place? That's where the comments and debates were on this one. – Yeah, and this actually relates – it may not seem like it – to the section we threw in about the environment, where we talked about electricity. We probably didn't do a very good job of making clear how related those two parts are, so let's talk about that briefly. We discussed the electricity component to show how the utilisation of new technologies can have broader implications, often negative, which we therefore need to think about. But here's the connection to the governance part, which we didn't really explain clearly. If it requires a lot of electricity to mine these coins and gain some level of control, it also means the people who will have the most control, and the most shares of whatever the crypto or blockchain is, are the people with the greatest access to the electrical system. And what many of you may not realise is that a lot of the people who control big percentages of coins and other blockchain-based systems are actually quasi-governmentally connected – either through personal relationships or actual government support – and can use massive amounts of cheap electricity to run massive mining farms. That means that if a government, say, or someone connected to someone in power, wanted to gain control over these things, and they had that massive capacity, they could theoretically gain some level of control – maybe even majority control – and change the rules within the system. This is an interesting paradox: it's not a one-coin-one-vote system where all the coins are distributed equally around the world. Those with literal access to power – in this case, electricity – are oftentimes the ones who then get to write the rules. – Yeah, so there's definitely a connection there, and if you're the average cryptocurrency enthusiast buying fractions of bitcoin or ethereum or whatever, these are things you generally don't think about. But clearly, in a macro sense, there are sociopolitical and socioeconomic links driving, or at least providing the structure to, the market we play in. – Exactly. – In the context of blockchain, and some of the other technologies and themes we talk about through the course, we often mention remittances, particularly overseas remittances. I think you had an interesting experience with that recently. – I did. This is one of the areas – probably because of my interest in migrant workers, trafficking, and other issues in that space – that I personally am really excited about from a blockchain perspective: the idea of someone being able to transfer money peer-to-peer very quickly, perhaps using a mobile device, instantaneously, with very, very low fees. I'm super excited about that.
It's not possible for everybody yet, but I think within the next three or four years – certainly five or ten – it will be available to pretty much everybody, assuming countries allow their currencies to be converted and whatnot. So this week, actually, just by happenstance, I was in a position where I was transferring money to a friend of mine in the Philippines, and I had to physically go into downtown Hong Kong, to a place called Worldwide House. I had to write out something on paper, on this really old form – which, by the way, they messed up anyway: they took my name down wrong on the transaction form, even though they had my ID and everything – and it was a cumbersome, relatively expensive process. Now, the money made it there, and if you think about it, over the past several decades these remittances have allowed migrant workers to spread out across the globe and provide for their families and friends, really from everywhere. It's amazing in that regard, so don't get me wrong. But I can see what's coming next, and as you said in the module, David, even if you can just decrease those fees by 1% or 2%... – Big impact. – You're talking about billions of dollars going to the developing nations of the world. I think it's really, really exciting. And what I did – I never do this – I actually took my phone and filmed a little vlog, or selfie, whatever the kids are calling it these days, and made a little video of myself. We're gonna put that together and send it out to you as well, so you can see what we're talking about when we discuss the remittance process. – Overall, the experience was... – I've done it enough times that it was about what I expected. Neither good nor bad. The money made it there, and I'm grateful for the process, but I'm really excited for the day when I can just do it on my phone. – Well, that's it for this round. Thanks again for your participation and contributions. We've thoroughly enjoyed reading the comments and discussing the ideas between ourselves, which we frequently do after reading them – we'll ping each other or talk when we see each other. – A lot of the questions, I have no idea. What do I say? – These are really great insights that you all are sharing, and we appreciate it. Moving on: in our next module we're going to explore cybersecurity and crime, which will build on what we've already covered in modules one and two. We look forward to seeing all of you again in our next roundup, after module three.
Module 3 Cybersecurity and Crime
3.0 Module 3 Introduction
– Welcome back. In this module we're going to explore a really interesting part of FinTech that frequently ends up in news reports: cybersecurity and digital crimes. The ubiquity of technology, and our reliance on it in daily life, makes cybersecurity a really important and fascinating topic. – Now, I'm sure you've seen reports of hacks exposing the personal information of millions, or perhaps you've even been a victim of cybertheft or another digital crime yourself. As devices, accounts, and other aspects of our everyday lives become more interconnected, the convenience we gain is balanced by the necessity for cybersecurity. For many institutions, cybersecurity is somewhat like the story of Sisyphus in Greek mythology. If you're familiar with Sisyphus, he was sentenced to roll a large rock up a hill, only for it to roll back down once it reached the top at the end of the day, forcing him to start over again, day after day after day. Similarly, institutions are under near-constant attack by cyber attackers, with new threats always appearing. So who is responsible for thwarting these threats and protecting user data? – And for all the benefits we believe FinTech's rise will create, its potential for good is also tempered by its potential to be used for illicit purposes. Given that, it's really important for us to consider these risks through the principles of trust, accountability, proximity, privacy, and cultural lag that have served as touchstones throughout the course. So in this module we want to explore cybersecurity and digital crime, and their importance to FinTech, through some stories that feel like movies but are actually true. To get us started, we're gonna look at a billion-dollar bank heist.
3.1.1 Case Study – Billion Dollar Bank Heist
In February 2016, at Bangladesh Central Bank's headquarters in Dhaka, something occurred that laid bare a profound weakness in the global financial system. When banks move money around the world, they use a system called SWIFT – the Society for Worldwide Interbank Financial Telecommunication – a consortium that operates a trusted, closed computer network for communication and payment orders between banks. Today, SWIFT is used by over 11,000 financial institutions in more than 200 countries and territories around the world. One of them is Bangladesh Central Bank – BCB – headquartered in Dhaka. On a daily basis, staff members at BCB would go into a highly secured room with closed-circuit security cameras, log into SWIFT, and dispatch payment orders over encrypted communication. 8,000 miles away, the New York Federal Reserve Bank is the gatekeeper of much of world banking, hosting accounts for 250 central banks and governments – including the BCB. When the New York Fed receives a payment order, it follows the instructions and sends the money to the recipient. At the same time, it sends a confirmation back to the sender – in this case, BCB – marking the transaction completed. This process happens all around the world, every single day, with about $5 trillion being directed via SWIFT. And the system is designed to be unbreachable. On Thursday, February 4th, 2016, 35 payment orders using the credentials of BCB employees were sent via SWIFT to the New York Fed. Five of them went through, but the other 30 requests were blocked: the Fed's system had detected a sensitive word in a recipient's address and flagged those transactions as suspicious. The next day, a total of $101 million was successfully transferred from BCB's account to several accounts in Sri Lanka and the Philippines. But in the SWIFT operation room in Dhaka, it was quieter than usual. The printer was malfunctioning, so none of the confirmation letters were printed. The staff didn't think much of it, assuming it was a small glitch they would fix the next day. After spending hours on Saturday getting the printers to work, the BCB employees were caught by surprise by the 35 payment requests – and the SWIFT communication system was still not working. Assuming the orders were mistakes, the BCB employees tried to contact the New York Fed by email, phone, and fax to cancel the transactions, but the Fed was closed for the weekend. On the following Monday, BCB got the SWIFT communications system working again, and only then realized that the most daring bank robbery ever attempted using SWIFT had happened four days earlier. It would prove to be the most severe breach yet of a system designed to be unbreachable. It turned out that the hackers had installed malware on BCB's servers that sent the 35 payment instructions and deleted any incoming SWIFT confirmation messages. When the Fed was back in business that Monday, BCB was able to reach out and ask them to block the transfers – but it was too late: the money had already been sent to the recipient banks. So they sent SWIFT messages to the Philippine bank, RCBC, but it was a public holiday in the Philippines, so the messages would not be read until Tuesday, February 9th. And by that time, the money had already been transferred out.
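One detail worth pausing on before the story continues: the malware's key trick was suppressing the confirmations that BCB would otherwise have reconciled against its own record of outgoing orders. The detection logic itself is almost trivially cheap, as the minimal sketch below suggests (plain Python; the data structures are invented for illustration and are not SWIFT's actual message formats). What failed in Dhaka was that nothing independent of the compromised host was left running such a check:

```python
# Toy reconciliation check (illustrative; not SWIFT's real message format).
def reconcile(sent_orders: dict, confirmations: set) -> dict:
    """Flag orders we recorded but never saw confirmed, and confirmations
    for orders we never recorded (either way, a red flag)."""
    missing = {ref: amt for ref, amt in sent_orders.items()
               if ref not in confirmations}
    unknown = confirmations - sent_orders.keys()
    if unknown:
        print(f"ALERT: confirmations for orders we never sent: {sorted(unknown)}")
    return missing

# Back-office record of outgoing orders vs. confirmations actually received:
sent = {"TXN-001": 20_000_000.0}   # the one order the bank knows about
seen = {"TXN-001", "TXN-099"}      # the Fed confirmed an order we never sent
print(reconcile(sent, seen))       # alert fires; {} means nothing unconfirmed
```

Run against an honest record and an untampered confirmation feed, a check like this would flag rogue orders immediately – which is precisely why the attackers deleted the confirmations, and why an out-of-band channel (a second system, or even the humble printer) matters.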
Some funds were transferred to Sri Lanka, and those were later recovered because a misspelled word in the instructions triggered an alert at the local bank. But the 81 million USD that went to the Philippines was not recovered. That money was sent to four fake accounts at a small Manila branch of a bank called RCBC, and from those accounts it was withdrawn and laundered through Philippine casinos – never to be recovered. At the time, Philippine casinos were not covered by anti-money-laundering laws, so it was nearly impossible to track the money. As of today, most of it is still nowhere to be found. Similar cyber crimes have been reported elsewhere, such as in Vietnam and Ecuador, and other cases will likely come to light. The hackers, however, have yet to be identified.
Additional Readings
3.1.2 Case Study – Billion Dollar Bank Heist: Proximity
So if I'm thinking of an old-school movie where a cowboy goes in and robs a bank, he can only take away what he can physically carry. In fact, in a lot of these bank-heist movies, the physical weight of the money is actually a challenge, and the robbers have to balance the risk of getting caught against getting away... – Speed. – Speed, all those things, right. It's an entire action-packed scenario. But what you're saying is that in this type of scenario, they can shoot the money all around the world to different accounts – maybe some of them land, maybe some don't – and then pull the money out of the successful transactions. And in this particular instance, unlike a physical bank heist where you might walk away with a few thousand dollars, maybe a few million, most of the attempt was unsuccessful and they still received $81 million. – That's correct. In a traditional bank robbery, you really only get one chance at a time. – Unless they're really bad. – Unless they're really bad. But in the cyber-heist situation, even in this Bangladeshi central bank example, there were 30-, 40-plus instructions they kept trying to push through, so you can think of it almost like spam. – So, in a lot of what we discuss in this course when we talk about ethics, decisions about whether or not to do something for a moral reason come down to this concept of proximity, which we talked about earlier. Say you're going to walk into a store and steal a candy bar: you have to face the physical proximity of walking past the person, and the psychological pressure of going against societal rules. But in this instance, you have some guys in some random country somewhere who are never going to see the outcome, never going to understand who they're harming. The harm is disparate, the odds of being caught are very, very low, and, as you said, the cost per infraction is minor. So once you figure out the code or the script or whatever is necessary to... – You can just keep pinging. – Just keep doing it over and over again, and if something hits, you could get $81 million. – And to your point, because of the geographic distance between the location of the crime and wherever the perpetrator might be, that connection with humanity is removed, which makes it easier to perpetrate certain illicit activities. – So that brings up another question, because it's not just the psychology of moral decision-making but also the very practical matter of enforcing the law. – Yeah. – As FinTech gets better and more efficient, if criminals get better and more efficient at utilising FinTech, it's got to be harder to enforce these rules. – That's absolutely correct.
So one of the large initiatives that many nation states are now pursuing is cybersecurity cooperation between countries. Because of the nature of what you described – it could be criminal activity, it could be other forms of access to data we may want to control – it's very difficult to coordinate investigations, let alone prosecute the people potentially doing wrong. Even in this Bangladeshi bank heist, Bangladesh was involved, the United States was involved, the Philippines was involved, Sri Lanka was involved. Because of this transnational aspect, it's very difficult for any single nation state to deal with it alone, so there has been a lot of movement towards cybersecurity partnerships and alliances amongst countries to try to manage the problem.
Additional Readings
3.1.3 Case Study – Billion Dollar Bank Heist: Accountability
So, again, coming back to moral decision-making: the psychology of business ethics, or FinTech ethics, largely comes down to how our actions impact those around us, right? And how our actions may harm someone down the road. From a legal perspective, the law tries to ensure that those who are in a position to stop bad things from happening do stop them, and that those who are harmed receive some type of redress – compensation, or whatever form it takes. So in this type of challenging cybersecurity situation, there's a really simple question: who is actually injured by this? And how do you compensate them, or help them move beyond it? Because if, from a law enforcement perspective, you can't identify who was harmed, it is often going to be very difficult for a government to muster the courage, or marshal the resources, necessary to really help those people. So, who is harmed here? – When you think about the after-effects of this: one of the things that happened was that Bangladesh ended up suing the bank in the Philippines where the money was transferred. Now, that bank may or may not have followed proper procedure, but I think it's very difficult to say they were the perpetrators of the actual crime – and yet somehow they're being held accountable for a minor mistake relative to the magnitude of the crime itself. So, to your point, there's a large disconnect, because identifying the people who are actually responsible and holding them accountable is pretty difficult. – That brings up another related question. There are multiple parties touching this transaction along the way, right? You have Bangladeshi regulators, maybe, and Bangladeshi bank officials. You have the US Fed and those touching it on the US side. You have Filipino banks, regulators, et cetera – really, players all around the world. So who is in the best position to actually stop this? And who should be responsible for this type of transaction? – I think there's a lot of debate around that, and to be honest, I don't know that it's settled. For crimes committed in particular countries: if we could identify the source country where the hacking occurred, that would of course be a locus of the crime, so you might have some prosecution there. Some of that money was, of course, sent to the Philippines and then cleaned, or laundered, through casinos there, so there seems to have been some criminal activity at that point. The people who took that money into the casinos may have been involved – or they may simply have been engaged without understanding the full magnitude of where the money came from. Who knows. Each of those acts is a locus of crime, part of the larger narrative, but it's very difficult for a local prosecutor – say, in the Philippines, or wherever else this touches – to connect all the dots, like you were saying. – Yeah, you wouldn't even have access to the information.
If a Philippine official contacted the US Fed, there's no way they're gonna give them information... well, it's unlikely they're going to give... – It'd be very difficult. – Very difficult, and it would require national-level support. Okay, so then, thinking about how we move forward on these types of things, especially in terms of the future – because this is only gonna get easier and easier, right? – let's move this to a distributed-ledger type of system, with blockchain or other types of cryptocurrencies involved. Do you anticipate this type of thing being more or less likely to occur? And would it have changed the process of actually securing the funds, or finding who was responsible? – So, at a basic level, the issue is not the integrity of the blockchain or the ledger itself. The issue is: once those coins have been distributed, how do you hold or store them? We have a series of examples of certain types of wallets, or exchanges, being hacked, where people were able to access the coins – or sometimes coins being held for ransom because people got hacked and lost access to them. And that raises another interesting question.
Additional Readings
3.1.4 Case Study – Billion Dollar Bank Heist: Cultural Lag
– So, David, what did we learn from this case about cybersecurity and crime? – Well, what I'm learning is that it's really complicated. The cross-border nature of it means it's incredibly difficult to enforce. It's not just about the money: there is reputational damage, embarrassment for governments, embarrassment for people. It's really broad and wide-scale. There's very little risk for the people actually committing these crimes, and the cost to them is very, very low, so they can just spam these attacks out and still make a really significant amount of money. In this case, they failed the vast majority of the time and still walked away with $81 million. In a normal bank heist, that would be the biggest haul of all time. – Sure. – One thing I don't completely understand is: what were the failures that allowed this to happen in the first place? How does this even happen? How do you lose $81 million? – Yeah. So the interesting thing is that when we think about cyber crimes, there's obviously a technological component – advances in technology create more opportunities and new methods for stealing money or, as we'll see later, stealing data. But this Bangladeshi bank heist is really interesting because it wasn't just about the technology. It was a confluence of factors: people were involved, processes failed, and there was old equipment that should have been updated and wasn't. That confluence led to this really significant outcome. In a lot of situations, companies, banks, and governments simply need the basics of security: good anti-virus software, filtering out as much malware as possible, using firewalls, keeping equipment updated, and making sure people understand the processes. These are things that weren't actually happening in the Bangladeshi bank's case. And after this happened, SWIFT and the New York Fed put out announcements saying, hey, be aware of this – so when subsequent attempts occurred, in places like Vietnam and Ecuador, banks were actually able to stop them, even though the attackers were again trying to manipulate SWIFT transmission codes. – So I guess the takeaway is: on the one hand, it's kind of amazing that we've gotten to the point as a society where, with the push of a button, money is flying around the world. This probably happens millions upon millions of times. – At least. – Every single day, right? – At least. – And so, although we're focusing on the negative side, it's actually pretty amazing that governments, even in developing countries like Bangladesh, are able to bank with the New York Fed and transfer money all over the world, and most of the time that works out for the benefit of everybody, right?
But the flip side is, we also have to be cognizant of the challenges and make sure we’re staying ahead of these things, because obviously, one of the big things that we’ve talked about a lot in this course is that the law and punishments are often retroactive and reactionary, and they’re not really able to stay ahead of these problems, so I assume we’re going to keep seeing some aspects of these things going forward. – That’s right. – We’ll always be, unfortunately, we’ll always be responding to the last crime or last situation, and that also maybe means, unfortunately, we may not be able to react effectively to the last situation either, as it takes time to put in good policy, and it takes time to get everybody onboard. To the point that you raised earlier about the difficulty of enforcement, if these are transnational crimes, you have to get multiple jurisdictions involved– – Yeah. – To police together. – It actually reminds me of, have you seen the movie Catch Me If You Can? – Yes. – The Steven Spielberg movie? – Yeah. – So. – Great movie. – Yeah, Frank Abagnale is considered one of the greatest fraudsters of all time, right? And recently, within the last few years, he spoke at Google, and it was a really widely watched video where he was giving a speech about his life, and it was so compelling and really interesting. One of the questions at the end that struck me was, somebody from Google asked him, “Do you think you could’ve been successful as a fraudster today, given all the advancements of security and technology?” And his response, quite famously, was, “It’s easier to be a fraudster today than ever before.” And he said that he would’ve made so much more money had he tried to commit fraud in this day and age than he did at the time. So I think it’s kind of interesting. As technology advances, more good things are happening, but it also widens the door for people to abuse the system. – Exactly.
Additional Readings
3.2.1 Case Study – Apple v. FBI
Okay so, we talked a lot about data and the importance of data, but who’s responsible for protecting it? As we consider this question, let’s think about another situation. In December 2015, unfortunately, there was a terrorist attack in San Bernardino, California. The two attackers were eventually killed, and the authorities recovered an Apple iPhone from one of the attackers. The FBI, however, was unable to access the iPhone because it was encrypted, which basically means a security password was needed to enter the phone. The problem the FBI faced was that if they entered the wrong password a certain number of times, the information on the phone would be totally erased. So the FBI went to Apple and asked them to decrypt the phone, allowing the FBI to access the information inside. From the FBI’s perspective, this was important because the information was necessary for their investigation, and could even prevent a possible future terrorist attack. From Apple’s perspective, however, this could potentially lead to what lawyers call a “slippery slope” — basically a precedent that might ultimately lead to greater intrusion and other privacy issues for its users. As a result, Apple rejected the FBI’s request to provide access to the locked iPhone. Now, this was so important to the FBI that they actually went to court to try and compel Apple to provide access to the phone. After some posturing, though, the case was never tried. And so we don’t really know the exact answer to the legal question of whether the security risk was enough to compel Apple to open the phone. But this story raises a lot of very interesting questions that we need to consider. So, for our students – in this situation, let me turn the question to you: how do you feel about Apple’s decision? And why do you feel that way?
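As an aside, the erase-after-repeated-failures protection described above can be sketched in a few lines of Python. To be clear, this is a toy model, not Apple’s actual implementation; the ten-attempt limit and the LockedPhone class are our own illustrative assumptions.

    # Toy model (not Apple's implementation) of a passcode lock that erases
    # its data after too many wrong guesses.
    MAX_ATTEMPTS = 10  # hypothetical limit, chosen for illustration only

    class LockedPhone:
        def __init__(self, passcode, data):
            self._passcode = passcode
            self._data = data
            self._failures = 0

        def unlock(self, guess):
            if self._data is None:
                raise RuntimeError("data was already erased")
            if guess == self._passcode:
                self._failures = 0
                return self._data          # correct guess: data is readable
            self._failures += 1
            if self._failures >= MAX_ATTEMPTS:
                self._data = None          # brute force destroys the evidence
            raise PermissionError("wrong passcode (failure %d)" % self._failures)

The design point is that the data and the failure counter are coupled: there is no path to the data that bypasses the counter. That is exactly why brute-forcing passcodes risked destroying the very evidence the FBI wanted, and why they asked Apple for modified software instead of simply guessing.

Additional Readings
3.2.2 Case Study – Apple v. FBI: Trust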
– So Dave, how about in your situation, how do you feel about this? – Well, if I understand things correctly, on the one hand, you have a really challenging scenario where, as a government, you’re trying to prevent crime. Alright, and one of the things we’ve talked about in this course is that any type of criminal prevention is largely reactive. And so as a law enforcement agency, you want to be as proactive and predictive as you can, so that you can stop things from occurring in the first place, right? But the flip side is, it’s a scary thought to think that a government would have the right to access our personal data within a smartphone at any time simply because they demand it. If you think about what is contained in our smartphones now, it’s not just the texts that we send, although that’s significant; it’s not just our images, although there’s a lot of those. It’s where you go every single day, what advertisements you stopped to look at, what payments you’re making, who’s in your social network, and who you communicate with. And so the idea that the government could demand that is actually, you know, kind of challenging. – That’s interesting, but what about the argument that people might make, that by virtue of us using certain applications on our phones, or using the phone to do banking or other things, we reveal our location, our transaction history, our social relationships. There could be people that argue, and companies that actually make this claim: by virtue of the fact that you’re using our product, we have access to that data. So how can we distinguish between that situation and governments wanting that data too? Because it seems like we give a lot of that data almost willingly to companies, but why are we not necessarily willing to do that when it comes to governments? – It’s a good question. I think, first of all, I’m not sure I agree that most people give it willingly; they kind of give it ignorantly. – There we go, yep. – And so a lot of more progressive laws, including in the EU, for example, are now saying that you have to opt in to that data sharing, rather than opting out, because again, from a psychological perspective, and we’ve talked about this a lot in this class and others, people are lazy, and we often will agree to things that we don’t fully understand, especially when that information is hard to process. I think that’s definitely true with smartphones. One of the things with smartphones is that they took the world by surprise, and a lot of our behaviours evolved within that ecosystem before we really understood the consequences. So we’re seeing, with Facebook and other things, companies that have in many ways taken advantage of our ignorance and our laziness, and so now the law, once again, is retroactively going back and restricting that. And I think that’s relevant because, although it’s important to think about how a company can monetize our data, and that’s something we should be talking more about, the reason why I think most people would be more concerned about the government knowing it is because the government has the power to coerce you even further. Right, they have the power not only to give you freedom; they have the power to take that freedom away. And so I think for many people, the idea of any government having free access to that type of data is kind of an Orwellian, 1984 type of scary amount of data.
Additional Readings
3.2.3 Case Study – Apple v. FBI: Cultural Lag
And are there examples of different countries over the last few years, all over the world, that have instituted certain types of these kinds of controls and filters? And do we feel that those things are necessary? And do we feel that those are the type of things that, as users, we should be at least somewhat willing to hand over to the government? If not, where do we draw that line? – Yeah, so it’s tricky. A lot of people would look at certain governments and characterise them as authoritarian or very aggressive in terms of their policing of people, but the reality is, London is one of the most surveilled cities in the world. – They have more CCTV cameras. – Yeah, they have more CCTV cameras per capita, I think, than any city in the world. And this is true in New York, in DC. I lived in DC; there are cameras everywhere. And so it’s true to a certain extent that we’ve already given up so many of these concepts of freedom and so much private information that we don’t even realise, I think, to a certain extent, what the effects of that will be. And so, yeah, I think as a society we definitely need to take into account the freedoms we have already given away. But now, as we’re looking at these things retroactively, it’s not just about us as the data providers or the government as the eventual user; there are these companies in between. And I think the question for FinTech is, what is the moral obligation of those companies in the middle in terms of protecting that data? And I think it comes back to a fundamental question: do we own the right to our own private data? Right, I think that’s why the distributed ledger and blockchain are so appealing to many people: private data is one of the biggest and most important commodities in the world right now, and yet we individuals, who the data is about, have no control over who uses it, who sees it, who sells it, et cetera. And so I think these are the types of things we have to figure out as a society. – And I think those arguments and debates that you rightly described, that as societies we need to figure out, apply not just to FinTech, which of course they do, but also to other advances in technology, particularly in biotechnology. So think about the commercialization of DNA testing: you provide samples to companies so they can check your family’s history or health markers in your DNA. There’s a lot of debate and questions about, hey, once you hand that over to these companies, who actually owns that DNA at that point? Because that is so uniquely yours; can you really hand that over to somebody else? And again, that’s a very similar situation to what you described before, about how we sign up for apps ignorantly, not understanding what rights we’re giving up. Similarly, in these kinds of DNA tests and other commercial biotechnology projects that are going on, there’s a lot of ignorance around, hey, what are you actually giving up to these companies? – Well, that actually ties in nicely to this Apple case, because in the state of California, right? 
There’s that great case where they had this unsolved serial killer case, and the police for decades didn’t know who this guy was. And then relatives of his took one of these DNA tests, sent them in, not realising that the data produced wasn’t going to be private, and it ended up on a public database, and investigators were able to match DNA from the crime scenes against it, and that data was actually used to find him and capture him as a serial killer. So many people in society were debating this issue. On the one hand, they were super excited that you have this… – Murderer off the street. – Murderer, yeah, he’s off the streets, right? And he’s been caught. But the flip side is they’re like, wait, hold on, how did they get his information? How did they know it was him? Because that information had been put up without anyone realising that it could lead to exactly identifying him as the killer. And so I think this is the dichotomy that we face now: the utilisation of smartphone technology and all these technologies has opened up so many avenues in life that we’ve never had before, communication and financial transactions and data and knowledge and so many cool things. But we are simultaneously ourselves becoming the product. And I don’t think we really understand the repercussions of that yet.
Additional Readings
3.2.4 Case Study – Apple v. FBI: Accountability
So if we go back to your original question about whose responsibility it is, then, to protect data, where do you fall on that? – So importantly, as a consumer, I’m increasingly realising that, number one, it has to start with me. As a parent, I’m realising that I’m trying to do a better job of educating my children about privacy and data than my parents did, not because my parents are bad, but because they didn’t have to face these issues. – Challenges. – Yeah, and actually, studies have shown, when they’ve looked at morality and decision-making psychology, young people today are very similar in their stances on moral decision-making in almost every regard, except for one. And the one big difference today, versus, say, one or two generations ago, is the perception of privacy. And young people today do not have the same standard or high regard for privacy, because they’ve grown up on a stage. It’s a public stage, right? Every day is Instagrammable, and if you didn’t click it, it didn’t happen, and so we’ve given up so much of our own privacy that it’s no longer even perceived as a moral issue anymore, because there is no other option, in their mind. So I think it definitely starts with the consumer. I’m gonna flip this around on you, though, because from an Apple perspective, Apple’s business model now, publicly, from a marketing perspective, is: hey, buy our products because we don’t sell your data, right? And this is in part true, because a lot of their revenue model is based on the hardware that they produce, right? So do you think that they actually care about data and they’re taking a kind of moral high ground, or is it just that they know they’re making most of their revenue off the hardware anyway, and so this is just kind of a marketing ploy? – Yeah, that’s an interesting question, and to be honest, I don’t think those two things are mutually exclusive. Certainly, Apple executives may feel like they have a moral high ground, because of this experience that they had, and because their current business model doesn’t require them to monetize the data they have on users by selling it to external sources, by the nature of their business and the ecosystem in which users operate, since it’s almost all within Apple. – Yeah, it’s enclosed, yeah. – It’s enclosed, and so at a certain point, if that business model changes, will their ability to take that moral high ground change? Perhaps. The profit incentive can be really powerful for listed companies. – See, this is where I struggle with this. My history, as you know, is that I used to be Apple’s outside counsel within Asia. They had a direct phone line to me when I was still working for my law firm, and I had a lot of close interaction with various people within Apple at the time the iPhone and the iPod Touch were first being introduced into Asia, and one of the things that immediately became apparent is that the profit margin was paramount, and the profit margin on the hardware, at the time, was well over 60%, right? Just imagine, that’s like a cosmetics type of profit margin. And the interesting thing about Steve Jobs and the business model that he made is that it was fully self-contained, because he wanted people in an ecosystem so they essentially would have to… – just use Apple products. – Yeah, because you have to give up a lot to leave that ecosystem, right? 
So whether it’s the earbuds or all these things, they can only work within the Apple ecosystem. And now, interestingly enough, you have, say, the Android model, which is the exact opposite, where it’s like, flood the market with these things as broadly as you can, open the software… – And they don’t care who the hardware is from. – They don’t care who it is, because their revenue model is based off of the selling of the data. And so I’m not so sure that Apple is altruistic or moral in this way. I think if they had built a model that could generate revenue in a way that’s similar to Google, while simultaneously maintaining the profit margins on their hardware… I mean, they’ve proven that profit is paramount. – And so again, kind of to what I was saying, I think, when push comes to shove, particularly for publicly listed, traded companies, that profit motivation frequently overwhelms any moral principles that CEOs and companies may espouse, which is unfortunate. Now, we do see certain leaders, more and more, taking on more of an activist approach, beyond just their business, into certain aspects of political activism, and having a voice when it comes to certain moral issues, which, I think, is perhaps a good thing. And I think it’s necessary for such leaders to contribute that voice to these kinds of debates in order for us as users, them as the producers of these products, as well as governments, to collectively try to think about how we can manage these issues around data and protecting data.
Additional Readings
Kezer, M., Sevi, B., Cemalcilar, Z., & Baruh, L. (2016). Age Differences in Privacy Attitudes, Literacy and Privacy Management on Facebook. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 10(1). Retrieved from https://cyberpsychology.eu/article/view/6182/5912
3.2.5 Case Study – Apple v. FBI: Privacy
So we kind of flew by the fact that the US government asked Apple to build a backdoor in the first place, right? And so I think many of us maybe just glossed over the very idea that they can build a backdoor into these things, and that if they wanted to, they could access this data essentially whenever they want, assuming the agreements would allow it. What about the idea of a technology company having that ability in the first place? There are huge markets globally, in terms of, you know, smartphone technology and other types of FinTech data, where people maybe don’t even realise that these backdoors already exist. And in some cases, it’s almost a free flow of information from the user to the company, and then eventually to the government. So is that even moral or ethical in the first place? – If we just extrapolate from Apple to other kinds of technologies, either apps, hardware, or, you know, wallets that may hold cryptocurrencies, almost all of these have some sort of protection, a password or keywords or whatever it may be. And some of those have what you’d call a backdoor, and some of them don’t. So frequently, with a lot of wallets, once you lose that password, your key to get in, that’s it; you lose it, and the coins, or whatever value you had in there, are now gone. So more broadly, do we, or do governments, then have a responsibility to police that? – Right. – That’s an interesting question. I think if we tie this back into the Apple case, the question then is, at what point does the government’s need to access certain information– – Security, safety. – rise to the level where the company, or privacy, is compelled to give way? Now, we have lots of history, in a lot of countries, where governments frequently go to somebody and say, hey, I know you have this: you wrote this paper, the Smithsonian paper, give us that evidence; or you videotaped something, give us that evidence. Right. So this is not uncommon in a lot of types of criminal investigations, and there is an analogy that can be made to that. But, you know, the data we’re talking about here, some of which is still private, may not necessarily always be directly tied to a compelling government interest, so should the government automatically have some way to access that information? – Yeah, I think that’s questionable. But I think if we look through history, there have been situations where governments do demand that of companies. – Yeah. And, I mean, this is like the new form of the national security argument, right? So before, national security was guns and bombs, et cetera. Now, it’s knowing where people are, and it’s mass behaviour modification. So the bike-sharing apps, I think, are good examples of this, where, you know, it’s not just about money going between a customer and a vendor; it’s the idea of understanding where people are at all times. So my last question for you is, when you see a case like this, do you have an iPhone? – I do. – You have an iPhone, okay. So when you have a case like this… – And I’m wholly ingrained… – Within the Apple ecosystem. You’re in the Apple ecosystem, so good, they’re not using your data. Do cases like this make you question carrying a smartphone? Not that you’re going to commit any… – That’s right. – Wrong acts or anything, but did you ever stop and think, or for your kids, you’ve got two kids, right? Did you ever stop and say, do I want a smartphone in my kids’ hands? 
– Yeah, so when you ask that question, that’s immediately where my mind went. And I think, for a lot of young people, some of them may not be mature enough in certain ways to understand the issues that surround some of these things. Like you were saying with that study about how the younger generation thinks about privacy, compared to, you know, the generation before. I think it’s important that we educate our children, our young adults, about the impacts of this kind of use of smartphones, the use of particular applications on smartphones, and technology in general. I think that kind of education will make other aspects of forming good policy better. I think we know, from different things that we’ve touched on in the course, that informed consumers will generally make better decisions. And if they make better decisions, then we can more fully utilise the positive aspects of these new financial technologies, as opposed to being used by them. – Would you force your kids to give you the password to their phone, do you think? – You know, fortunately, they’re young enough. – But in the future, do you think? – I don’t know. – Because this is a microcosm of a broader question. So when we’re talking about our children, my children don’t have smartphones that they carry around, but I would want to know what they’re doing, and I would see that as my responsibility to keep them safe as a parent. And if you take that and extrapolate it out to the government, that’s exactly their point, right? So I like your answer, because you’re saying it’s all about educating and then letting people make informed decisions. But I think when you are in a position of authority and you’re trying to protect people, oftentimes that desire to protect maybe overcomes that, yeah. – And so my hope would be, and I honestly don’t know, my hope would be that my efforts to educate are effective, in the sense that I can hopefully trust them enough that they could use this technology initially, and if there’s potentially an issue, then we may have another discussion about its use. – And then we’ll hack their phone. – That’s right. And then we’ll ask Apple to hack the phone. But I think a nice way to wrap this up, and to get at the complexity of this issue: there was a quote by General Michael Hayden, a former director of the National Security Agency in the United States, as well as of the Central Intelligence Agency, with respect to this whole situation with Apple and the San Bernardino terrorist case. He commented in a report that this may be a case where we have got to give up some things in law enforcement, and even counter-terrorism, in order to preserve this aspect of our cybersecurity. And I think he captured it well, in the sense that there’s a balance of national interest, but there’s also a competing interest of how we want to secure data, this cybersecurity. And this is an ongoing debate that I think we will continue to have, and hopefully our students, thinking through this course and these questions, can also contribute to that debate wherever they may be.
Additional Readings
3.3.1 Case Study – The Sony Hack
In an age where data is supposedly the new oil, FinTech companies have raised serious concerns about data protection and compliance, especially in light of the recent spate of global cyberattacks, as the presence of valuable personal information makes FinTech companies increasingly attractive targets for cybercriminals. Okay so, let’s dive into another story. On Monday, November 24, 2014, a typical week begins at Sony Pictures Entertainment’s headquarters in Culver City, California – right next to Los Angeles. As employees begin arriving at work, they realize that this is far from an ordinary work day. The image of a skull flashes on every employee’s computer screen, accompanied by a threatening message warning that “this is just the beginning”. The hackers, calling themselves the Guardians of Peace, go on to say that they have obtained all of Sony’s internal data, and that if demands are not met, they will release Sony’s secrets. And because of the hack, the whole Sony network was down, rendering the Sony employees’ computers completely inoperable. The hack had brought the global corporation to an electronic standstill. On November 27, the hackers leaked five upcoming Sony films online. This was the first of what would become many subsequent leaks in the days and weeks to follow. Speculation began arising that North Korea might be responsible for the attack, in retaliation for the movie The Interview, which depicts an attempted assassination of North Korea’s leader, Kim Jong Un. Back in June, when the trailer was first released, North Korea had called the movie an “act of war”, saying that it would carry out strong and merciless countermeasures. About a week later, the FBI officially began an investigation, and Sony hired a cyber-security firm to carry out an investigation of the attack. In the following days, more leaks were published online, including the salaries of top-paid executives and more than 6,000 employee names, job titles, home addresses, salaries and bonus details. Reports also arose that Sony was fighting back, using hundreds of computers in Asia to execute a “denial of service”, a so-called DDoS attack, on sites where its stolen data was being made available. On December 7, C-SPAN reported that the hackers had stolen 47,000 unique Social Security numbers from the Sony computer network. With this data being leaked on the internet, other cyber criminals instantly swooped in – leading to various fraud, theft and other problems for Sony’s employees. On the same day, North Korea denied all involvement – but called the hack a “righteous deed of the supporters and sympathizers” of the country. And beyond just coping with the cyberattack and the various leaks, Sony was also challenged on other fronts, such as by former employees filing class-action lawsuits against the company, which they argued had taken inadequate safeguards to protect personal data. And Sony also faced battles with the media, demanding that the media stop reporting on the stolen data and claiming that journalists were actually abetting criminals in disseminating the stolen information. On December 16, the Sony hackers threatened a 9/11-style attack on theatres that showed The Interview, which led to theatres across the United States cancelling their premieres, and Sony pulling all TV advertising for the movie. Urged by President Barack Obama not to give in to the hackers’ demands, Sony instead jumped directly to a digital release. On December 19, the FBI officially implicated North Korea in the Sony hack. 
North Korea proclaimed its innocence, and in the following days, heated rhetoric emerged from both countries. Now, other security experts had some doubts about whether North Korea was actually involved in the hack. Another theory points the finger at angry former employees, whereas others say it was the work of outside hacking groups that simply used the release of The Interview as cover for their actions. Now, the challenge that we have is that the Sony hack was not a single anomaly, as we are witnessing a huge influx of data breaches across the world. Just to give you a few examples: In 2013, 40 million credit and debit card records were stolen from Target. And, just before the Sony hack, 56 million credit card numbers from Home Depot customers were also breached. In 2017, breaches at some of the biggest companies in America also came to light, such as Yahoo, Uber and Equifax. In the case of Equifax, the hack compromised the data of around 143 million Americans – that’s about half of the US population and well over half of the adult population. And the hackers had gained access to over 200 thousand credit cards. And in 2018, we learned that Marriott had a data breach affecting 500 million guests. So, with all these massive data breaches globally, important questions naturally arise around our key principles of trust, proximity, accountability, cultural lag and privacy. Like, who owns your data – and who is protecting it? Can you trust them? How may data protection be regulated? With recent technological advancements, are we able to protect our own data and privacy? We’ll discuss these questions further with you in our next session.
Additional Readings
3.3.2 Case Study – The Sony Hack: Trust
Okay, so I mean, it’s interesting, and that allowed people to maybe view Spider-Man a little bit early, but what is the connection between this and FinTech? This isn’t really like a FinTech case. – So that’s a great question. I think this case, the Sony hack, leads into broader questions about data and security, and those are things that we wanna talk about in the context of this module. But in terms of FinTech, we like to think that if something has ‘crypto’ in front of it, somehow it’s more secure than other forms of finance or data or other spheres of finance that we may be involved in. – It’s like we don’t understand it, so we assume other people don’t too. – Perhaps, perhaps, right. – Yeah, yeah. – We have cryptocurrency, Bitcoin being probably the most representative cryptocurrency at the moment. And even then, we know that participants in the cryptocurrency market have been hacked, right? Probably the foremost example is an exchange called Mt. Gox that was based in Japan. At the time, it handled most of the cryptocurrency transactions of the world; they ended up being hacked, losing Bitcoin valued at billions of dollars, and eventually they went bankrupt. And so that’s a very direct example of how cybersecurity issues are still very relevant, even to things in FinTech that we think may be secure. Right, so that’s the first point. I think the second point is a little bit broader, in the sense that as FinTech and different types of applications of it become much more widespread, populations that maybe didn’t have access to traditional forms of finance now do, not by having to go to a brick-and-mortar bank, but by accessing banking services through their phones, right? – Yeah, yeah. – You would assume that a lot of these populations are maybe not as technologically sophisticated. And so as they get exposed to these new technologies, their concept of cybersecurity and how to protect their data will become an issue too, and they can potentially be a population at risk in terms of hacking and cybersecurity. So this is why this is a very important topic that goes hand-in-hand, or in parallel, with advances in technology and FinTech. – So one of the things that often comes up when these types of things happen is who owns the data, but also who’s responsible for it. So in the Sony case, what happened to Sony? Did they get in trouble for this at all? Was there any liability on their part? – Well, there’s no criminal liability that we know of, but we know from a civil liability standpoint, meaning somebody filing lawsuits, that there were a number of lawsuits against Sony saying, hey, you should have been more responsible for how you protected that data than you were. – So is this customers, employees, shareholders, all of the above? – So my understanding is that the majority of the cases brought against Sony were generally from former employees, who probably had much more data with Sony, because they had employee records and different personal information that ended up getting exposed. But if you think about it, that information is about you individually, or individual people, but it’s being held by somebody else, so who actually owns that data? – Right. – Does Sony own that data? Do you own that data if it’s about you? Because that idea of ownership then links into the idea of responsibility, which then links into the idea of protection. 
And then understanding that gives us a more comprehensive approach to trying to figure out who actually has a responsibility to protect all of this data. – And it doesn’t seem like, either from a regulatory standpoint or certainly from an ethics standpoint, we have really answered those questions yet, right? – Really quick, on the ownership and liability point: it’s very interesting that in many situations with certain social media and social networking services, users will post different types of personal data, be it pictures, be it stories, be it videos, and frequently these social media services will actually say they don’t own the data. – Yeah, yeah. – But they will say that they are licensing the data– – They don’t want the responsibility of ownership. – And then that creates all kinds of questions: well, if we own the data now, then do we have to pay you for that data? So frequently, the way they navigate this somewhat thin line is: you still own your data, but you’ve licensed it to us by virtue of using our platform. And then, in that situation, they can use it like they own it, but maybe they don’t have the same responsibilities for protecting it. And so this again raises a number of questions.
Additional Readings
3.3.3 Case Study – The Sony Hack: Accountability
So one of the challenging things with this type of a data breach is that it relates to time, right? It’s often very difficult for the parties involved to know when they were hacked, and then, after the fact, it often takes time for them to react or even publicly tell people that the hack occurred, right? So how does that impact these scenarios? – So time is a really interesting variable when it comes to these cybersecurity matters. Like, as you mentioned, frequently companies don’t know, or only know later, that they’ve been hacked. So then, at that point, if something happened many years ago– – So it’s not like on TV, where it’s like, I’ve been hacked and all the lights are gone. – That’s right. – Everyone’s typing on the same keyboard at the same time. – Well, I think maybe in certain situations that could happen, I don’t know, but I imagine, in a lot of situations, a company has been hacked or data has been exposed, either intentionally or unintentionally, and they might not know about it for a prolonged period of time. I think we’ve seen a lot of examples of that; even the Mt. Gox situation that we talked about a few minutes ago is a situation like that, where the hack might have happened years before. And so there’s a lot of uncertainty around this time element, particularly when did it happen? But then, on the back end, let’s assume the company has found out, a day later, the same day, whenever it is; then how do they react? I mean, it seems from some studies that, on average, companies take at least six months to react and figure out what their next step is. One challenge is that each of these companies that goes through this has very different capabilities. Certain companies, because they have really good management processes, good leadership, good operational control, have teams with some sort of protocol that they go through when an emergency situation comes up. But there are a lot of companies, the reality is most companies, that are probably not really well managed, and maybe don’t know what to do when that happens. And then you have very interesting incentives, particularly for listed companies, companies that are publicly traded, that have this kind of issue, and debates that occur probably at the highest levels, both in the boardroom as well as at the chief executive officer level, about when they should reveal certain information: should it be before a certain deadline in terms of quarterly cut-offs and things like that, because maybe they don’t want to impact the share price– – Or their job. – And so there are a lot of incentives, or disincentives, that go into wanting to publicise or not publicise the information as well, and so this is a great challenge that we have. And then it goes back again to the idea of who’s in charge of protecting this data, right? Because, as we’ve discussed, if a company has, by whatever method, aggregated or compiled this data, then do they have responsibility or stewardship over it, right? I think, from an ethical perspective, we would say, yeah, right, if something has been left in your care, then you would assume that there’d be some level of responsibility to protect what has been left in your care. Now, it seems that that’s not always the case from the behaviour of business leaders. 
– Yeah, and one of the things, I mean, let’s assume that North Korea was involved, let’s just say, and it’s not clear that they were: this is one of those cases that, when it happened, kind of brought home to me this idea that personal data is in many ways as important to national security as a border might be, and I had never really thought about that before. So what are the security implications from a data standpoint? – Yeah, so if we take a step back, there are a lot of people who feel data will be the fuel of not just FinTech but perhaps the whole Fourth Industrial Revolution. So, you know, people talk about all the technological advances– – Like AI– – That’s right, all of this will be empowered or further enhanced by large amounts of data. And so, at the core, large technology companies in the United States and in China, and other places in the world, a lot of them are branching out and building very large platforms, where users participate on the platform through the various different services that these companies offer. But at the heart of all of that is that these companies now have the opportunity to get a fuller, more comprehensive view of usage, and richer data that can be used to develop new products, as well as to develop profiles of people. Now, we already know that in China, for example, at the government level, they’re trying to develop social credit. So, to tie into your question, that has very direct implications on how that credit, or that data, may then be used. – National policy. – That’s right. – There are examples of this where government officials have said they’ll use this in determining visa rights– – Who can leave the country and who can’t leave the country– – What jobs you can get, what you can study in university, whether you can be a journalist, a lawyer, et cetera.
Additional Readings
3.3.4 Case Study – The Sony Hack: Cultural Lag
Okay, one of the issues on the ethics side that we typically lump in is the regulation of this, right? And so part of the issue with data and data protection is that, globally, there are different standards everywhere, right? And the nature of this data… Okay, again, we keep coming back to the idea that money is not in a vault anymore, right? It’s code somewhere, and so it’s information going in and out of servers. And when the data leaves a jurisdiction, it’s not like it’s physically leaving, right? A server may be hosting data for someone in Hong Kong or in the Philippines, or, while they’re travelling, the information could be moving via lines through the U.S. system. What do you think is the responsibility, from a regulatory standpoint, for consolidation? How can we, as a society, have standards for these types of things when you have this spaghetti-bowl-like mixup of regulations globally? – Yeah, so that’s a really great question. The reality is, I don’t think anyone has a great answer to it. On one hand, the way the U.S. tries to extend its regulatory reach is basically that there are a number of financial regulation laws that say, if you use U.S. dollars for transactions– – Every bank does. – Which almost every bank, every country, every large company in the world has to do in some way, then somehow, because of that, you are touching the U.S. financial system, and if you’ve committed some sort of crime, or that transaction is part of a larger network of maybe illicit transactions, then you’ve maybe fallen into U.S. jurisdiction. So there’s a set of regulations that get at this. But, like you’re saying, what if you’re actually not even using currency? How does that work? So that raises a broader set of questions as well. I think, by their very nature, cybersecurity and cyber regulation are reactive things. They will always be reacting to what just happened. And so it’s very difficult to put in bright-line rules that say, “Oh, ABCD,” and as a result, I think, as users, and consumers, and people who will be impacted by these advances in technology, we have a responsibility to think within an ecosystem of the values and principles that we might want to abide by. – Yeah. – Because I don’t think we can… Continually, what we’re going to find is that we can’t rely on law, and we can’t rely on governments per se, to be at the forefront of leading how we want to govern this aspect of the problem. – See, but this is the challenge, right? It’s not like a typical negotiation scenario where I’m going to buy something from you, and then I get the chance to say, “Well, I want the price to go up or down,” whatever, right? Every single day, we click on potentially hundreds of websites where we are agreeing to their privacy policies. Sometimes you formally have to agree. A lot of times, it’s hidden behind the scenes; you’re not even paying attention to it, right? And so, on the one hand, that diminishes the value of those things, so essentially they’re pushing that burden onto us, as the consumer, to say, “Do you agree to this or not?” But the challenge is, it’s not like anyone is taking the time to read and understand those things. And then, even if you did, it’s still not like you have the opportunity to negotiate. 
It’s not like you can, say, go to Facebook and say, “Okay, clause three, line number two, I don’t think this is appropriate, so let’s work that out”– – Yeah, the negotiation is: if you don’t want it, then don’t use our service. – So, same with the banking system, right? The financial system. Either you’re in or you’re out. And so it’s not like we really have a choice. So even if consumers wanted to have a choice, either you opt in, or you eliminate yourself from the entire system. – Yeah, so that’s very interesting. And we can see some analogies, or similarities, to a few other types of situations. For example, in the financial services space, particularly in the world of derivatives, we have organisations where market participants got together to set a set of ground rules for how they want to transact with one another, because they didn’t want a lack of clarity or a grey area, and didn’t want to wait for government or law to come in and say, “This is how it’s gonna be.” And so I think, from the consumer perspective, you’re right: at the individual consumer level, we don’t have a lot of individual influence. But I think collectively, there is some influence. Similarly, I think what we want to do is invite companies to have these discussions amongst themselves as industry participants, as market participants: how do we want to create a fairer, more secure ecosystem for these products? Because ultimately, this is a very long-term game. But if they don’t have that discussion, then, in the long run, it will just become more problematic. – Yeah.
Additional Readings
3.3.5 Case Study – The Sony Hack: Privacy
Getting back to the ethics of this, the saying goes that, “If you’re not paying for a service online, then you are the product.” – Product, that’s right. – Right. Yeah. And so– – Which is a great line, by the way. – It’s a great line, yeah. My students say all the time, you know, “We use this because it’s free.” I’m like, it’s not free. – You’re the product. You’re the data, basically. – Yeah, exactly right. So the business model is now no longer even that thing overtly; it is the data they’re collecting behind the scenes, and therefore what they’re doing with the data after the fact. Is there any consensus about the ethicality of that as a business model, especially when it’s often hidden from the consumer, especially for children? For example, a lot of games are free, Candy Crush, those types of things; they’re free, and they use the same psychology that created the gaming systems within, say, casinos, to get your mind wrapped around one thing: I gotta do one more, I gotta do one more, right? Is that somehow pernicious or unethical, or is it just an extension of, you know, people’s weakness? – Well, I think that raises another great question. So we know that most applications that basically monetize off of data or ads, and that require active users, embed a lot of psychology– – Yeah. – Into the user interface– – Totally. – Into what information comes into your feed, because over time, they’re mapping the things that trigger you, basically. – And, just to clarify, when you say using psychology, essentially you’re saying using the weaknesses that they know exist within human behaviour collectively– – That’s right. – In order to keep us there. – From a behavioural science and psychological perspective, we know that we are less in control than we often think we are. – Yeah. – Right, and there are certain triggers, colours, for example, information, sounds, that tend to have influences on people’s behaviour, and a lot of these companies, particularly social networking sites, for example, spend a lot of time actively thinking about this, to ensure that users spend as much time as possible on their site. Because as they do that, they’ll use it more, the companies collect more data from that, and then they’re able to feed it into the model again. – So my last question about this, from a data standpoint, is: we’re talking about these implications for us, but what does this mean for the next generation, especially from an ethics standpoint? Because one of the things that we’ve discussed is that the only major difference that they’ve found between previous generations and this generation, from an ethics standpoint, is their perception of privacy. – That’s right. – And they’re living in a world without privacy, essentially, right? – At least in the way we would’ve thought of it when we grew up. – Sure, exactly. And so, you know, how do we perceive the next iteration of this? Do we think that, with distributed ledger technologies and other blockchain technologies, we will be able to control our own data, own our own data, kind of determine what people see? Or is this just going to be a new way to solidify this power over data? – So that creates a very interesting dichotomy in terms of the future, because on one hand, there’s a big pursuit. Blockchain and other technologies are, in some respects, more anonymous, right? 
Even though they’re open, they’re also more anonymous in terms of kind of protecting– – As anonymous as the system wants them to be. – That’s right. And so, in some sense, some of the FinTech technologies that we think about now actually create greater levels of anonymity than might’ve existed in the traditional financial system, but on the other hand, there’s a lot more information that was private that is now public as well. So it’s a very interesting dichotomy that people will have to live in as they get older, and I think when we think about our students, and our children as they grow older, they’ll live in a world that’s definitely less siloed. So thinking about, oh, this is a bank, this is a consumer company, this is a store, those kinds of distinctions, I think, will start blurring, as you alluded to. – Yeah, so one of my favourite fake news clips of all time was from this website called The Onion, and they did a story, this is, you know, ten or more years ago, so it was very prescient in nature, but it was talking about Facebook, and they revealed, fake, this is just a joke, but they revealed that Facebook was actually a CIA project, a secret programme to get people to post their private information in a public way. And they were joking about it because they called it Operation Overlord, I think, and said that the leaders at Facebook were actually CIA operatives, and the idea was the CIA had been working, as an intelligence agency, to get private information on people for so long, and now they realised that everyone just posts it anyway, and they have logs of where they’re going and whatnot, right? So the idea, obviously, again, is a joke, but the idea being that we live in a society where so many things are open, and even the concept of privacy, as you said, doesn’t even mean the same thing that it used to mean. – Well, and to tie that back into something you mentioned about national security, the Facebook example is perfect for that, because we know that in the most recent US presidential election, there seems to be a lot of very clear evidence that certain elements tied to various entities in Russia used Facebook as a platform to try to influence certain election outcomes in the United States. And so that is very much this idea of the weaponization of data, to influence outcomes that have very important national security considerations, right? Who will be the leader of an important country in the world? And so we’re seeing that. – So the same use of data that can make it easy for a large retailer to send you a personalised coupon is the same analysis of data that can also convince you to pick a certain candidate in what is supposed to be a democracy. – That’s right. – This is a challenge.
Additional Readings
Module 3 Conclusion
In conclusion, after all the stories about cyber crime, illegal use of cryptocurrencies, hacking and breaches of data privacy, many people, unfortunately, connect the rise of fintech with only bad things. They’ve lost trust in the institutions and innovators who are driving these changes. And to be sure, there have been a lot of scary stories that require immediate attention. But it’s also true that these new technologies can change the world in so many positive ways. So, what do we do? – Well, once again, society has a choice to make. From a proximity standpoint, these concepts may seem so distant that we don’t really take the time to understand or even question them. For example, we accept the terms and conditions of websites, like iTunes and eBay, so often that we have become desensitised and don’t really think about the potential future implications. Be honest, how many of you actually read those? And innovators are often so distant, or non-proximate, from the users that they can’t empathise with their concerns about data privacy. – We have this seeming paradox that pits our legal rights of personal privacy against the vast efficiencies and desirability of fintech innovations. For example, most people love their smartphones, and even those who don’t really love them are reluctant to give them up because they’ve become so integrated into our lives. – But after a period of cultural lag, we are all now becoming aware that by carrying around and using our smartphones, we are giving up some aspects of personal privacy. And we love the idea of being safe and secure, particularly from violent terrorist attacks. But when law enforcement asks large tech firms to decrypt our smartphones, that can be quite unsettling. – But has the era of privacy already passed? Have we already given up so much personal data, via social media and our Google searches and purchasing habits, that these questions about privacy are already moot? And from an accountability standpoint, maybe you think the big tech firms and banks are so big that you can’t do anything about it anyway. I know that I have become so numb to the announcement of large data breaches that I don’t even really think about them much anymore. But that probably needs to change. – In fact, maybe the opposite is true. Maybe, since we are now more exposed than ever, giving up significant personal data on a minute-by-minute basis, we actually need to have even tighter regulations and controls on the firms who are collecting, using, analysing and sharing our data. – Now here’s the part that many of you may not yet realise. The fact is that, in many ways, we are not only the consumers but are in fact the product that these large companies are trying to monetize. How do companies like Facebook and Google, which allow us to use their main services for free, make money? Data. Our personal data is what drives revenue at these companies and many others. – So what should we do? How do we strike a balance between protecting our privacy and ensuring sufficient security and data protection? And who should be accountable for cyber crimes, data breaches and other illicit uses of fintech innovations? – And what we are seeing now is only the beginning. As 5G connections and quantum computing become more common, data collection and analytics are only gonna increase, driving the next iteration of machine learning and artificial intelligence, which you’re gonna focus on in the next module.
Module 3 Roundup
– Welcome to our roundup for week three. Can you believe we’re already halfway through the course? Now, we mentioned this last week, but it bears mentioning again: we really appreciate all the active participation in the discussion board. It has been really dynamic. I mean, the quantity of the comments has been great, but even more, the quality of the insights and experiences that have been shared has really impressed us. We’ve been really blown away, and there’ve definitely been a few times where we’ve both thought, wow, it’d be really cool if we could build on the discussions in a live classroom. – And we’re also really grateful for those of you who may have joined the course a little late but are not any less enthusiastic in sharing your thoughts, experience, and opinions with us. This course is really meant to be a continuous discussion, so wherever you are right now, please take your time, and we’ll try to respond to some of the newer comments in the earlier modules from time to time. That being said, we also highly recommend that you read and comment on other people’s posts and take advantage of the full learning community. And, as I said in some of the feedback, the course is really only as good as the learners who are taking it. So we really do appreciate you and thank you, and we ask you to keep contributing your unique experiences and help further enrich the course. – So, we covered a number of really entertaining but also important cases in module three, which we hope compelled you to think through the implications of new technologies and how they intersect with crime and security. From the comments in the discussion forum, it seems many of you have been thinking about really similar questions too. So we want to spend some time addressing some of the great questions and contributions that were made. – But before we jump into that, just a quick update on enrollment. We’re over 5,000 students now, which is already way more than we ever imagined. And from the feedback we received from many of you, it seems the course has been informative and interesting. If so, please consider sharing it with your friends, colleagues, family, and within your organisations, because we genuinely believe the questions we’re considering in this course are crucial to crafting a better future. Now, with that out of the way, on to the comments. – So, RichardStample had another great comment this week, which is becoming a pattern; very consistent. He had a great comment about the difference between something in the law that is retroactive versus reactive. So maybe, Dave, you could take a crack at that and share your thoughts. – Yeah, so, first, I was actually really impressed. I made a mistake when I was speaking in that part; it was during one of the conversations we were having, and we were kind of riffing back and forth, and I said that the law is retroactive, and then I immediately caught myself and said reactionary. But those are actually two distinct and important aspects of the law. So, retroactive, if you’re not familiar, used in the legal context, means that if you create a law, it begins and is put into force as of an earlier time. So let’s say, as of today, there’s a new law that says taxis are no longer legal. If it were retroactive, you could say the law takes effect from January 1, 2019, and therefore anyone who was operating a taxi service from January 1, 2019 onward would have in some way violated the law. 
There is this legal concept and it does happen, but typically a retroactive law in this fashion would be something that's more positive. So, amnesty, for example. If you entered a country illegally and you've been here for a certain amount of time, then they could say, as of this date, anyone who entered before this time is retroactively kinda forgiven. So, what I meant to say, though, and what the conversation was really about, was how the law is reactionary, is reactive, meaning that the law tends to… We tend to create laws in order to solve existing problems after they occur. And this is good because, if you think of it from a Minority Report standpoint, you don't wanna punish people for crimes they haven't committed, and you don't want the law to kinda predict what is going to be happening. That's not what the law is for. But what that also means is if we're always reactionary, if we're always reacting to things that have happened in the past, then from an ethics standpoint it means that often you can have criminals or bad actors or just normal people doing what is technically legal but maybe a little bit unethical, and then the law is never going to be able to stay ahead of that. So, I appreciate you pointing that out. It was something that every time I listened to that segment I would always cringe a little bit, because I knew I made a small mistake. But it is an important concept of the law, and it gets into kinda cultural lag and why the non-material aspects of culture, like the law, are very slow to change, whereas the material aspects of culture, like technology, change very quickly, and there are often gaps in between the two. Okay, so the second comment that we wanted to point out is from joergHK. Again, a frequent commenter, we really appreciate all of your additions to the course. So, it's two comments in one, basically. He said, he asked, "Please don't move fast and break things." And for those that are not familiar with where he's coming from, this is actually kind of a modification of a statement that was made popular by Mark Zuckerberg, who said that, you know, in Silicon Valley– – So, the founder of Facebook. – Exactly, the founder of Facebook. He said, "We move fast and break things." That was kind of the mentality of Facebook and has been adopted by many startup founders in the Silicon Valley region. And so joergHK was saying, again, bringing cultural lag into this, he was saying, "Please, just take a minute, slow down." Break things, disrupt things, sure. But let's take some time and make sure that as we're doing so, we're doing it in a way that's thoughtful in that regard. He also talks about, though, how there are some ways, from a regulatory standpoint, that governments and institutions can advance technology forward while maybe minimising the risk of the breakage, the disruption. And he mentioned something called a sandbox. And so I wanted to maybe ask you to kinda describe what a sandbox is, especially from a fintech standpoint, and how they are being applied. – Yeah, so, sandboxes are interesting. They've kind of come into vogue, in a sense, in a lot of places in the world as financial markets try to understand how we're gonna cope with these new technologies that come in with respect to current regulations.
Because current regulations were made in the context of kind of a traditional market structure, and there's aspects of new technologies, like fintech, that will come in and change how that happens. And so some creative regulator somewhere, I'm not exactly sure where, said, "Hey, let's have something called a sandbox." This is not like the toy that you played in when you were little. – Although that's what it's named after. – But that's what it's named after. It's this idea of let's wall off this space and allow these innovators to play in this space, not subject to or constrained by certain regulatory measures, and let's see what happens. – Yeah, give them their toys and let them play and see what happens. – And that will give us indications of perhaps how we should regulate certain behaviours. But if we put current regulation on them, they may actually not be able to grow and it may not be applicable, but we wouldn't know that because they're gonna be constrained to begin with. And so putting in a regulatory sandbox gives these kinds of new companies that are on the fringe of certain regulatory rules an opportunity to expand a little bit, as well as for regulators to observe what happens and how that occurs. But at the same time the observation piece is important, because they may not be subject to the current rules and regulations, but they in theory should be observed: the effects, the impact that they're having on customers in particular, how is that working. So one thing we were discussing in a class we had earlier today, a live class that Dave Bishop and I had today related to fintech, was the success of regulatory sandboxes in particular jurisdictions in Asia, like Singapore. And one of the things we thought was really great was that Singapore, it seems, has coordinated a number of different policies in conjunction with their sandbox initiative, even from a few years ago. I remember hearing about what Singapore was doing a few years ago. In Hong Kong, we've only recently gone down this regulatory sandbox route, and I think we're still trying to coordinate this a little bit more with broader policies and different things that regulators are trying to do. So I think that's quite an important, we think that's quite an important piece to have if you really wanna cultivate innovation. Because if you have people and companies that put out new products, but they can't test them in a neutral way, not subject to the same regulations that a fully licenced and staffed brokerage firm or bank would have to subject themselves to, then that can be very onerous on these new innovators. – Yeah. Great question, thank you. Or great point. – So, one of the other awesome comments that we got in the discussion board this week was about privacy. So, CelesteMunger, I think, it looks like she's from Canada, talked about her thoughts on privacy, and one of the things that she started off with was that privacy is an illusion. – Privacy is an illusion. Period. – And then she ended with an example of DNA testing, which is another large area where a lot of privacy concerns have been raised recently and will continue to be raised, particularly in the biotech space and technologies related to genetics and things like that. So, on that note, what do you think about these issues of privacy? I mean, they're super important, but how do we think about them? – So, she is probably right to a certain extent.
Privacy is an illusion from the standpoint of a fully, traditionally private life, because we are constantly, as she pointed out, being recorded and we are ourselves giving out significant information. But at the same time I'm not sure that's what the definition of privacy means from a rights standpoint. I think if you think of the right to be forgotten, if you think about the right to be able to pull back your information, if you think about the right to be able to do what you want in your own home, which really is fundamental to many other rights in terms of human sexuality and having children, there's so many aspects of that. Just because we don't have as much privacy in our lives as we go out in public doesn't mean that privacy as a right has necessarily been eroded. And so I think this is the part that we as society have to do maybe a better job of really thinking through. As peter-nyc pointed out in a previous module in one of his comments, the concept of privacy, especially privacy as a right, is in and of itself a relatively recent legal construct. It only started happening a little over a century ago, and even up until the 1970s– – In the United States. – Well, correct. – And then globally later. But we're talking about a US legal context. – Yeah, in the US legal context, it really started 150 or so years ago but really wasn't institutionalised or even codified until the 1970s, actually, when some US Supreme Court cases, including the famous Roe v. Wade, which dealt with abortion rights, said there was an implied right to privacy in the US Constitution. And so within the United States and other primarily Western democracies there was this codified right to privacy. And so, in that context, in those nations where that right still exists, I think we do still have a very strong right to privacy, although that does seem to perhaps be eroding slightly. So I think there's a distinction between the legal right to privacy versus how much information about us is kind of flowing out on a daily basis. And so it's complicated, but it's important to be able to parse those things, because as regulation comes in we want, I think, I should maybe speak for myself, I want more regulation dealing with my private information, but that's more– – That might not be a technical right to privacy. – Exactly. That's like personal information that I wanna make sure is being used responsibly and that I understand what's happening with it. But it's separate from my overarching constitutional right to privacy, which means when I'm in my own home I can do what I want, that type of thing. – I think that's an important point. That potentially there is a link between traditional forms of the right to privacy, which I think initially were like, in your own home, people shouldn't be able to just come in and see what you're doing, and the idea that people shouldn't just be able to come in and look on your phone. Perhaps there is a link there, but I don't think that link is traditional in a sense. And maybe that will evolve over time. But we tend to use the vocabulary, a right to privacy, in various forms, and I think, to your point, that has evolved over the last century or so in what form that takes. So, early on it was about what you're doing in your house. But even then, certain activities, physical activities, sexual activities, weren't necessarily protected for many centuries and decades in America. And then that changed.
And then when we get to women's rights, that idea of what I do with my body, is that a right to privacy? What right does that fall under? Because in a lot of situations these are not explicitly stated, so they're inferred rights. And so, again, this will I think continue to evolve given how technology is evolving. – Yeah, and I do think it's interesting, because although DNA is not a fintech technology, it is a very interesting example that she's provided. For those that are not familiar, just to give you an example of a case that happened in the US within the last few years. There was a gentleman in California who used one of these private DNA-testing services. You swab the inside of your mouth, you put it into a vial, you send it in, and then they provide you DNA information about yourself. And maybe what he didn't realise at the time was that as part of the user terms of service, you also agree for that DNA information to be uploaded to a public website, which then becomes, I guess, I don't know all the details, but public domain or something. – Yeah, usually. Because I actually did get my DNA tested with one of the commercial providers, I don't know, a few years ago. It serves a lot of different purposes. It's interesting from a genealogy perspective, you can kinda see where your forefathers came from, and there's health indicators that can be helpful. And there's some question about how super accurate they are, but it gives you a general sense. But yeah, one of the things I remember as I was doing research was they take you through a series of terms and conditions and they basically say, "Would you allow your data to be included in certain databases that will be used and tested and whatnot." And I always opted out of those because I was kind of aware of those issues. So, basically, the fundamental question is, where will that end up eventually? And you actually don't know that, it's not clear to you. And until that was clear to me I didn't want to participate, so I opted out as much as I could. And I think, to your point, this person didn't do that, which ended up– – Do you know what happened? – Yes. – Okay, go ahead and finish. – Well, so the FBI apparently was looking for, I don't know if it was the FBI, but police authorities were looking for someone who apparently had killed a number of people. – A serial killer, yeah. – And they had some DNA evidence, and through basically linking of genealogy, so genetics of family trees, they were able to figure out, oh, this person was probably related to this person, and eventually figured out it was this particular individual, who had actually provided his own evidence himself, that ended up leading to his arrest. – Yeah, so it was really kind of amazing and yet scary at the same time. So a lot of people that read this story were like, "Wow, that is so cool." – Like CSI, the TV show. – Yeah, CSI. I mean, these cases had gone cold, I think in the 1980s, so it'd been 30 or more years. The idea is the killer is long gone, there's no way we're gonna find them. And then boom, you've got him. But then it's like, oh, wait, wait a minute. This guy sent in his vial of DNA, he was not expecting this to be run through a criminal database. And so, again, there was a good outcome, you found a serial killer, but I think it caused a lot of people to think, now, wait a minute, what's gonna happen 10, 20, 30 years into the future? What if they want to, whatever, because of ideology or race.
– Yeah, and this becomes one of these double-edged swords, because I think probably when we were in law school, a number of law schools in the United States got involved in the Innocence Project, where they were basically trying to represent people who they thought were falsely imprisoned. And one of the ways that they were able to help a lot of these people that were incarcerated, usually minorities, socio-economically very kind of disadvantaged, was through the advances in DNA technology. So, oh, actually this evidence that you have is not this person. And then they were able to free a number of people. So, again, of course you don't want people to be incarcerated wrongly, but at the same time, you can see a lot of different situations where a proliferation of this kind of data, where it becomes commercialised or commoditised and then ends up in the hands of actors who are using it maybe not for nefarious purposes but for profit, ends up becoming a problem. So you can easily think of people that need insurance, and an insurance company getting genetic markers, and even if you're not sick they say, "Well, you've got an X percent chance that you're gonna get sick with this disease, so we're not gonna insure you." So these are not necessarily, I don't think, the type of outcomes that we want. Or at the very least these are the type of things we wanna think about before just wholesale, let's open this up. – Yeah, so, again, like many things in the course, a double-edged sword. There's a lot of benefits that can come from this, probably some unintended negative consequences, and so we need to be very thoughtful about these things as we roll them out. – Our heartfelt thanks again for your participation and contributions. Putting this course together definitely was not easy, a labour of love with the emphasis on labour. But your enthusiastic engagement has really made the effort worth it. – Now in some ways the next module is really our favourite. In module four we will explore the implications of artificial intelligence, which is relevant now and will only become more relevant in the future. And we're sure that many of you are already thinking about artificial intelligence in some way, and we hope the content is interesting and really look forward to your thoughts and reactions. So we'll see you next week. Module 4 Artificial Intelligence and FinTech 4.0 Module 4 Introduction
So, welcome back. We are halfway through the course now, and you get to celebrate. So imagine that a friend calls to inform you that she has won two tickets to a concert with your favorite musician performing. The concert is this weekend and your friend invites you to use one of the tickets that she has won. Wow, what a great friend, huh? Now you are super excited and can't wait until this weekend. As you and your friend enter the lively concert venue, you notice an impressive kiosk covered with multiple flatscreens showing video footage of your favorite musician performing. So you stop for a few minutes to watch some of the videos cycle through, and now you're really excited for the concert and head to your section ready for a great show. Like you and your friend, thousands of other concert-goers also stopped at the kiosk to watch videos in preparation for the concert. However, what neither you, your friend, nor the other concert-goers realized was that while all of you were watching videos, cameras embedded in the kiosk were also watching and taking photos of you. Your image, along with those of most of the other fans that stopped in front of the kiosk, was captured and analysed by facial recognition technology. You see, your favorite musician has a number of stalkers that have made various threats over the years, so the facial recognition analysis was a precaution to identify anyone that might be potentially dangerous. Does this seem like a scene out of a movie? Or is this a type of technological Big Brother intrusion that seems at least a few years off? This may be surprising to many, but this story is not an imaginary future, it is actually the past, and describes what occurred at a Taylor Swift concert in May 2018 as reported by the New York Times. Besides sharing what was until now our secret, undercover interest in Taylor Swift, this story raises a few important concepts worth exploring. Now we don't claim to have all the answers, but we'll share some of our thoughts, and we invite you to consider these questions as well. First, given the potential threat of stalkers, were the actions of setting up a covert photo-taking kiosk and using facial recognition technology reasonable? And, would your opinion change if someone was caught versus if someone wasn't caught? And should it? Second, and more broadly, should people be informed that they are being recorded and that the images are being analysed, processed and potentially being included as part of a database? At the Taylor Swift concert, the cameras were not readily visible. But the reality for most people, especially in urban locations, is that we are really under near constant surveillance already. To use another concert example, in April 2018, a man by the name of Ao went to a concert of 60,000 people in China – and unbeknownst to him, during the performance of Jacky Cheung, a Cantopop superstar, all of the people within the audience were having their faces surveilled by cameras. And right in the middle of the performance, police went down the aisle and they actually apprehended Mr. Ao and took him away. It turned out that he was a wanted criminal, and during the time of the concert, as he was sitting there, unbeknownst to him, the system had identified him, and they took him to jail. In another example from China, this public surveillance was highlighted by BBC reporter John Sudworth back in December 2017.
Now, it is estimated that there are at least 170 million surveillance cameras all over China, and the plan is to install upwards of 400 million cameras over the next few years. So Mr. Sudworth, he visited the city of Guiyang, the capital city of Guizhou Province in China, which is actually only a few hours from us here in Hong Kong. While in Guiyang, Mr. Sudworth participated in a little exercise, where he was tasked with avoiding detection by Guiyang's network of cameras for as long as possible. Now Guiyang is home to about 4 million people, so it's not a small place. How long do you think he was able to avoid detection? Well… He was discovered and detained by authorities in about 7 minutes. Below this video we have provided a link so you can watch a short clip of his experience to put it into context. Additional Readings 4.1.1 Public Surveillance – Privacy vs. Security
So we just wrapped up these very interesting stories and experiences about the use of public surveillance in identifying and capturing people in a variety of public settings, including train stations and concert halls. So, let's ask a more fundamental, basic question then. Are the actions of setting up covert photo-taking kiosks, or relying on this wide-ranging and wide-scale facial recognition technology, reasonable? And if so, when? – Mm. – Dave, what do you think? – Yeah, it's tricky for me, because on the one hand, I completely understand the kind of public security standpoint. But as someone who grew up in a very conservative place, I guess my immediate, initial thought is one of privacy. Right? – Okay. – So even if there's one guy in the crowd who may pose a risk, to, say, Taylor Swift, or maybe even the community, there's 59,999 other people that are not really posing any threat, and yet they are having their face scanned, information about their location, their preferences, the things that they like, being recorded, and the question, you know, I just, that, for some reason, doesn't really resonate. It feels really weird to me. – So, I understand the privacy argument and I think it's important. And I think people like to think, at least, that hey, I'm a person unto myself that should be respected. But what are the real costs for somebody in that audience who, let's say, is not that criminal, not a threat to Taylor Swift, and not a criminal being taken out of the concert hall by the police? At the end of the day, it sounds like their privacy is actually still being preserved, right? – But the thing is, whether we recognise that or not, we are under constant surveillance. And again, you could say that they got the one guy, they got the one bad guy, and everyone else, you know, their privacy wasn't really violated. But what happens when they're looking for someone based on ideology? What happens when it's not a benevolent government that's utilising that technology? What happens when it's not a government at all, – Mm. – and it's private actors that are utilising those technologies to somehow bifurcate society or to restrict the rights of others? I mean it really doesn't require that much imagination to concoct a scenario where an individual, a large company, or even a government could utilise these types of technologies to single out people and potentially cause them very significant personal injury. And maybe I'm old-fashioned, and I know that you have consumers that are actually choosing this on their own, either knowingly or unknowingly. They're putting watches on children that surveil them everywhere they go. Obviously our phones, to a certain extent, are kind of watching where we go. And so maybe I'm being naive as a consumer, and maybe this is already occurring, but the idea of linking these things with facial recognition software, geolocation and government police powers is something that's actually quite disconcerting. Additional Readings 4.1.2 Public Surveillance – Accountability and Cultural Lag
So I think the idea of regulation is actually quite interesting, and we've talked about it both in the context of this module as well as previous modules. But more broadly, should people be informed that they are being recorded and that their images are being analysed, processed, stored and used in other ways? Is that something that regulation should be concerned about? – Yeah, so the easy answer is yes, of course. I mean, I'm sure if someone's recording you, you're gonna wanna know it, and most laws around the world do already have some level of notification requirement, unless there's, say, a journalistic exception within the law. But here's the problem. So, with anything that's ubiquitous, meaning it's around us all the time, we become so desensitised, even to warnings, that we just tend to ignore them. So think of like a streetlight or something, right? There's so many things that are there to kind of guide us, protect us day in and day out– – I guess an example of that would be, like, all these signs we see, CCTV in operation – Yeah, exactly. – which we see everywhere. – Which is probably there just because of a legal requirement. – It's a legal requirement. – To notify you. – Exactly, and so therefore, if they were to use that video recording against you, or perhaps in a court of law, they would be able to say, we were authorised to do so because we met this bare minimum requirement. – Notification. – Exactly. If you think of the Taylor Swift example, though, very few people when they buy a ticket to go to a concert are actually gonna read through the terms and conditions of that particular event. I don't, and I'm a lawyer. I'm sure, you know, the same thing probably goes for you. And on a daily basis, we click I accept, I accept on so many notifications, that again the kinda ubiquity desensitises us to the fact that these are real legal notifications. So, I think we have to start thinking as a society, if we're gonna take this stuff seriously, what are not only the moral, but the legal implications, in a very practical context, to make sure that we're taking these notifications seriously, and that we actually understand what rights we're giving away. Because the reality is, I think, every single day, we're giving away pretty significant rights. – And so I think that's really interesting. So, there's a whole area that is somewhat regulated, and so the example would be – Right. – which you just described: there's a lot of laws talking about notification of when you're recording somebody, be it audio, visual, whatever. But then there's this whole other area of law that is still completely unsettled or unregulated – Yeah, yeah. – which is what we're dealing with now in the context of AI. It's, okay, now that you've processed and analysed all this data, what legal obligation do you have, if you're the one who's processed or analysed it, towards the person that you've actually recorded? And actually in a lot of places in the world it's completely unsettled – Yeah. – so much so that there's actually people or companies that can use that data that they've analysed or processed and maybe sell it on to third parties. – Yeah. – Right, and that's purely because it is unregulated. And so that creates an interesting space. And what you're talking about is, hey, if we don't, as citizens of whatever countries we're in, or as people, as just citizens of society, if we don't articulate the values that we want regarding privacy or security or whatever it may be, – Right.
then it'll be very difficult for us to roll back – Extremely difficult. – or identify, or partition off the rights that we do wanna protect. – Right. Yeah, and this is a great example. If you remember, going back to Module 1, we talked about cultural lag and the idea that it often takes time for the culture within a society to catch up to the very rapid change in technology, right? And, you know, thinking within my own classroom for example, I often ask my law students, raise your hand if you have a camera with you. And there's usually kind of a few seconds of stunned silence and then immediately it dawns on them that yes, they do have a camera with them, right? – They probably have more than one. – Yeah, a smartphone, right, even a smartphone alone has multiple cameras, and so then again, I ask them, okay, well now raise your hand if you have two, and they then realise that on their laptop, on their iPad, in all these devices, they actually have multiple cameras with them right there in that moment. And so, you know, if you think about that from a cultural lag perspective, these technologies change so quickly that we have them on our person at all times. Which means that we as individual citizens are also the ones that are kind of surveilling those that are around us, right. Now what do you see? You go on YouTube and you'll see recordings of an auto accident where normal, everyday cars are filming everything that's going on, right. You'll see individuals getting into a fight, or an altercation, and they automatically whip out their phone, right. And so it's interesting how, again, we're not just talking about governments here. And these technologies are expanding so that the ability, the costs, the size of the files, the stream rate, all these different things are making it so that this is really around us all the time. And again, we have to take some time to really evaluate from a cultural perspective how we expect these things to evolve, because if we don't, then the companies, through various forms of capitalism, are gonna make those decisions for us. – Yeah. And those lessons are broadly relevant to artificial intelligence, but also specifically relevant for the issues that we'll face in FinTech and financial technologies. – Absolutely. Yeah. Additional Readings 4.2.1 What Is Artificial Intelligence (AI)?
Hopefully, you are still with us and not scrolling through Taylor Swift music videos. Because what we're going to explore in the rest of this module is interesting and important… and Taylor Swift will still be there after we're done, we promise. Initially, the stories about Taylor Swift's concert or the BBC reporter's experience in China do not seem to be related to FinTech, right? So where is the connection? Well, the advances in surveillance we've shared with you are not just about more and better cameras, but really about the facial recognition and identity analysis software that is growing more efficient due to advances in artificial intelligence ("AI") and other technologies that fall under the broad umbrella of AI, like machine learning. Now if that phrase is vague to you right now, don't worry, we're going to get to that soon. Now people have been working on facial recognition software and forms of AI for a while. In fact, a trio of early technologists, Charles Bisson, Woody Bledsoe, and Helen Chan, researched how computers could be used for facial recognition as early as the 1960s. So today's "hot" concepts did not just pop up, but because of the increases in computing processing power, the potential of AI is starting to be realized, which has propelled AI into the public discourse, and rightfully so. So what that means is, for those of us participating in this course, you and me, in our lifetimes, many of the big leaps in FinTech will be enabled by computing power that has resulted in more mature, developed AI. Thus, a major theme of the still developing FinTech story is the increasing influence and applicability of, first, machine learning, and more broadly, artificial intelligence. And this is what we want to explore in this module. So to help us get started, let's consider a few terms, some buzzwords, so that we have the right vocabulary for our discussion. Now keep in mind, the definitions of many of these terms are not uniformly consistent yet, and even experts themselves may have slightly different approaches or views, but we went with a few definitions that we think are not just comprehensive but also comprehensible, even if you're not a technology expert. So what is artificial intelligence, or AI? AI is really an umbrella term that encompasses a number of technologies, but before jumping into that, let's start with some history. Alan Turing, the pioneering English computer scientist and mathematician, and at least one of the grandfathers of AI, first started considering AI concepts even before 1950. His eponymous Turing Test moved beyond the question of "Can machines think?" to the more nuanced question of "Can a machine imitate a human?" And basically, if a computer and a person were answering questions that you asked, but you didn't know which answers were given by the human or the computer, would you be able to identify the computer from its answers alone, or could the computer trick you into thinking it was a person? And John McCarthy, long-time Stanford professor and one of the fathers of AI, who is widely credited with coining the term "artificial intelligence", expanded further. To "Uncle John", as he was referred to by many of his students, AI is the "science and engineering of making intelligent machines." But what then is intelligence?
The saying "Intelligence is the ability to adapt to change" is widely attributed to Stephen Hawking. And the increasing capacity of machines to learn and react as new data is presented represents this process of adapting that is at the core of Hawking's view of intelligence. Increases in computing power, coupled with the creation, collection, and analysis of an ever-growing amount of data, will continue to enhance the capability of artificial intelligence. Additional Readings West, D. M. (2018). What is Artificial Intelligence? Brookings Institution. Retrieved from https://www.brookings.edu/research/what-is-artificial-intelligence/ Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460. Retrieved from https://doi.org/10.1093/mind/LIX.236.433 Sharkey, N. (2012). Alan Turing: The Experiment that Shaped Artificial Intelligence. BBC News. Retrieved from https://www.bbc.com/news/technology-18475646 Torres, B. G. (2016). The True Father of Artificial Intelligence. OpenMind. Retrieved from https://www.bbvaopenmind.com/en/technology/artifficial-intelligence/the-true-father-of-artificial-intelligence/ Cameron, E. and Unger, D. (2018). Understanding the Potential of Artificial Intelligence. Strategy+Business. Retrieved from https://www.strategy-business.com/article/Understanding-the-Potential-of-Artificial-Intelligence?gko=c3fb6 Brundage, M., et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Future of Humanity Institute. Retrieved from https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf 4.2.2 What Is Machine Learning?
Now that we've touched on AI, let's move on to machine learning. Machine learning really is a subset of AI, and often when people refer to AI, they are usually talking about machine learning, especially when it comes to FinTech. For example, advances in algorithmic trading are being powered by machine learning. Additionally, the ability of financial institutions to manage risk, detect fraud, and even optimize operational processes is being made more efficient and accurate through machine learning. Even lawyers like us, correction, former lawyers, who maybe thought we were immune from technological change, are being impacted as machine learning technology is already being implemented to review documents, like contracts or loan agreements, much faster, cheaper, and even more accurately than a human could. Sounds exciting, right? So let's jump into it. What is machine learning? Now machine learning is effectively a machine, say a computer, combing through and statistically analyzing large amounts of data to find patterns. Now that data could be in the form of text, like in a loan document, or it could be a series of numbers, like stock prices, or a whole host of other types of information. Now based on that data, the machine can start making predictions, and as more data comes in, the predictions become more refined. Most of us interact with machine learning almost on a daily basis, basically whenever we enjoy any kind of service that recommends things to us, you know, like the new show that Netflix is going to recommend to you tonight. Lastly, machine learning can be further specified as supervised learning, where the data is labelled or identified; unsupervised learning, where there are no such identifying markers; or reinforcement learning, which is what Google's AlphaGo represents, and is based on the machine figuring things out after exploring multiple permutations of outcomes, so basically a massive, iterative process of trial and error.
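To make the pattern-finding idea concrete, here is a minimal sketch of the supervised case in Python. It assumes the scikit-learn library, and the figures are made up for illustration, not real market data:

```python
# A toy supervised-learning example: labelled historical data in, a fitted
# pattern out. The numbers below are invented purely for illustration.
from sklearn.linear_model import LinearRegression

# Each row of X is one observation (say, yesterday's price); each entry of y
# is its label (today's price). In practice there would be thousands of rows.
X = [[100.0], [101.5], [103.0], [102.0], [104.5]]
y = [101.5, 103.0, 102.0, 104.5, 106.0]

model = LinearRegression()
model.fit(X, y)                  # the "learning" step: find the pattern

print(model.predict([[106.0]]))  # a prediction for the next observation
# Adding more rows and refitting is how the predictions become more refined.
```

The same shape of code, with richer features and richer models, sits behind fraud detection or document review; the key point is that the machine derives the rule from labelled data rather than being handed the rule. Additional Readings 4.2.3 What Is Deep Learning?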
Since we've mentioned machine learning, it's important to briefly touch on something called deep learning. Now we won't spend much time on deep learning here, but because many advances in FinTech will be built on deep learning moving forward, it's worth explaining, even for just a few seconds. Deep learning is basically an enhanced form of machine learning that uses algorithms that emulate the neural network of the brain, basically how our brains learn, to help the algorithm learn through a progression of layers that get "deeper" and "deeper" as more data is incorporated. So like machine learning, deep learning can be supervised, unsupervised, or reinforcement-based. If you weren't familiar with those terms before, hopefully they make a little more sense now, and hopefully you also better understand the relationship amongst AI, machine learning, and deep learning. And we also hope that you've noticed that these forms of AI all rely on massive amounts of data, which is why our discussion of data at the beginning of this course is so important. Data truly is the fuel that will power AI-backed FinTech innovation.
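For the curious, here is a minimal, purely illustrative sketch of the "progression of layers" idea, using only numpy. The weights are random, so the output is meaningless; a real network adjusts these weights as training data flows through:

```python
# Each layer transforms the previous layer's output; stacking layers is what
# makes the learning "deep". Random weights here, purely to show the shape.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer: a weighted combination of inputs, then a simple nonlinearity."""
    w = rng.normal(size=(x.shape[0], n_out))
    return np.maximum(0.0, x @ w)   # ReLU activation, loosely neuron-like

x = np.array([0.2, 0.7, 0.1])       # e.g. three features of a transaction
h1 = layer(x, 4)                    # first layer of the network
h2 = layer(h1, 4)                   # a deeper layer, built on the first
score = layer(h2, 1)                # final layer, e.g. a single fraud score
print(score)
```

Additional Readings 4.3.1 AI and the Trolley Problem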
We've just discussed AI and data. Now, let's think about that in the real world by returning to something we discussed in module 1, where we introduced the trolley problem. If you recall, when we discussed the trolley problem we talked about two scenarios where a runaway trolley was about to hit a group of five people. In one of the scenarios you had the choice to divert the trolley with a switch, which would change the trolley's direction so that it hit only one person, who would be killed by the impact. In the second scenario, instead of a switch, you would have to push a person in front of the trolley to stop it – thus saving the group of five – but killing the person you pushed. Nearly everybody chooses to divert the trolley with the switch, and nearly all object to pushing a person into its path. Now this dichotomy highlights the important aspect of proximity in people's decision-making: how proximate or close we are to a given context, or how personal it feels, can alter our decisions completely. In recent years, the trolley problem has morphed into other dilemmas that have become popular in the news and in the media. This is especially true for AI and self-driving cars. With autonomous vehicles on the horizon, self-driving cars have to handle choices about accidents – like causing a small accident to prevent a larger one. So, this time, for our hypothetical scenario, instead of a runaway trolley, think of a self-driving car, and instead of a switch to redirect the car, the "switch" is the self-driving car's "programming". So for example, imagine a self-driving car driving at high speed, with two passengers, where suddenly three pedestrians enter the crosswalk in front of the car. The car has no chance of stopping. Should the car hit the three pedestrians, who will likely be killed? Or crash into a concrete barrier, which would lead to the two passengers likely dying? Now imagine you are the passenger of the car, what would your answer be then? And what car would you ultimately buy? A car that saves you, the passenger, at all cost in any scenario, or one that minimizes harm to all – but which ultimately may affect you? If there was no self-driving vehicle, and you were the driver, whatever happened would be understood as maybe a reaction, a panicked decision, and definitely not something deliberate. However, in the case of a self-driving vehicle, if a programmer has developed software so that the vehicle will make a certain type of decision depending on the context, then in an accident where people are harmed, is the programmer responsible for that? Is the car manufacturer responsible for that? Or, who is responsible? Is there even an answer to what a self-driving car should do? Now researchers at MIT, the Massachusetts Institute of Technology, further revived this moral quandary. They created a website they called the Moral Machine, and through that website, respondents around the world were asked to decide in various self-driving vehicle scenarios, such as whether to kill an old man or an old woman, an old woman or a young girl, the car passenger or pedestrians, and many other similar questions. Since its launch the experiment has generated millions of decisions, and analysis of the data was presented in a paper in the scientific journal Nature in 2018. The study sparked a lot of debate about ethics in technology, which is the purpose of this course. So given that, we'd like to ask you a few questions. One, who should you trust? Should we trust AI?
Or should we trust humans? Two, who’s responsible if something bad happens? So in the context of an autonomous driven vehicle, is the car manufacturer responsible? Is the software programmer responsible? Or another stakeholder? And third, culture. What is the role of culture in all of this? Let’s consider these questions together. Notes on AI and the Trolley Problem:
It is important to note that the trolley problem is fundamentally about showing how we process information and highlighting blind spots in our decision-making. Doing so hopefully helps us improve our choices by demonstrating the need for our morality and our sense of responsibility to humanity in our decision-making. And to the extent we think that morality, emotion, and humanity are important and worth developing, you could say that by linking AI and driverless cars to the trolley problem, we may be doing the opposite of what was intended and missing the point altogether, possibly to our mutual disadvantage. We should be wary that we are making the whole conversation less proximate.
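To see why the "switch is the programming" framing raises accountability questions, here is a deliberately oversimplified sketch with an invented choose_action function. Real driving systems weigh many continuous outcomes rather than a binary choice; the only point is that a design decision made in advance replaces a human's panicked reaction:

```python
# A toy illustration only: two hypothetical policies a manufacturer could ship.
# Whichever branch executes in an accident was chosen by a person, in advance.

def choose_action(passengers_at_risk: int, pedestrians_at_risk: int,
                  policy: str) -> str:
    if policy == "protect_passengers":
        return "continue"        # passengers saved at all cost
    if policy == "minimise_harm":
        # Swerve only if that endangers fewer people than continuing would.
        return "swerve" if passengers_at_risk < pedestrians_at_risk else "continue"
    raise ValueError("unknown policy")

# Two passengers, three pedestrians: the policies disagree, which is exactly
# where the questions of trust, responsibility, and culture come in.
print(choose_action(2, 3, "protect_passengers"))  # -> continue
print(choose_action(2, 3, "minimise_harm"))       # -> swerve
```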
Additional Readings Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J. F., & Rahwan, I. (2018). The Moral Machine Experiment. Nature, 563, 59–64. Retrieved from https://www.nature.com/articles/s41586-018-0637-6 (paywall) Huang, E. (2018). The East and West Have Very Different Ideas On Who To Save In A Self-Driving Car Accident. Quartz. Retrieved from https://qz.com/1447109/how-east-and-west-differ-on-whom-a-self-driving-car-should-save/ Hao, K. (2019). Giving Algorithms a Sense of Uncertainty Could Make Them More Ethical. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/612764/giving-algorithms-a-sense-of-uncertainty-could-make-them-more-ethical/ 4.3.2 AI and the Trolley Problem: Trust and Proximity
So, let's talk about trust. Now, Dave, you've been a passenger in a car that I've driven before. – I have. – So, who do you trust more, me or the autonomous-driven vehicle? – Well, as much as– – Tough question, I know. – No, so as good of a driver as you are, the reality is, well, I don't know. I guess this is the thing that is so disconcerting for a lot of people. I think when I'm in your car, I trust you. If I was the one that was driving, I would certainly trust myself, right? But I think for a lot of people, we just have a question about this completely autonomous, non-human actor, and not singular, either, like potentially thousands of these non-human actors that are gonna be out there with these large vehicles roaming around. The reality is, I think, that I would probably want to trust you more because you're my friend, and I know you, but I think empirically, I believe that it is probably a lot safer with a host of autonomous vehicles that are out there. – Yeah, I think you're right. I think a lot of the research as we have it now demonstrates that overall, things will probably be safer as more and more autonomous vehicles are on the road. But why do so many people, or why do you think so many people are so resistant to that? – Well, I mean, I think I'm going to flip it around and ask you, right, 'cause this trust is a big part of what we're talking about, and the fundamental, I guess, foundation for so much of this ethics conversation. And, do we over-trust ourselves? Do we under-trust technology, or is it the other way around? Like, are we rushing so quickly into these technologies without really understanding whether or not we should be trusting them? – Yes, so I think two things off the top of my head. One is, you know, as humans, we tend not to trust things that we don't understand. – Right. – Right. And so, I think that plays a lot into it, of hey, I don't understand how this exactly works, so I'm gonna distance myself from this, or I'm gonna be suspicious of it until I do understand how it works. I think there's a lot of that. Two, I think this idea of over-confidence, right. Again, as humans, we tend to be more over-confident of our own ability than we probably should be. There's been tons of tests that have been done where that's been shown, right? And I think the combination of those two things, of hey, I'm actually not that bad of a driver anyway, so it should be okay, plus I don't understand what's going on in this car with no driver. Those two things kind of collide, I think, amongst humanity or mankind, to kind of create this situation where hey, maybe I'm resistant to this change. – Yeah, so from the perspective of trust, I think you kind of hit on what we discussed earlier from a cultural-lag perspective, right? For a lot of people, they're gonna be very comfortable kind of continuing in that perceived safe method of travel, when in actual fact, the numbers maybe don't bear that out, and they'd be willing to persist with a situation where they're the driver rather than going into a potentially safer autonomous vehicle. And I think this gets to another interesting point that is often a criticism of the trolley problem in the first place: it presents this binary, almost illogical situation where you have to choose between one person dying or five people dying, or some really fantastical situation, when that's probably not the case at all. – Hey, it's not reflective of real life.
– Sure, they're not reflective of real life, and so, I guess my question for you, from an autonomous-driving-vehicle perspective and an AI perspective: I think one of the real reasons why people in this industry are saying you should trust autonomous vehicles more is because they can communicate with each other kind of seamlessly and simultaneously. What's your perspective on that? I mean, is that kind of how it would work, and how would that potentially make things better, safer, smoother? – So I think that's a really cool question for at least, again, two reasons off the top of my head. I think historically, when you look at the earliest versions of this kind of autonomous driving, there's an idea that actually these vehicles would not be independent. They would somehow be in sync with each other to make driving much more efficient, so I think the more advanced forms of this autonomous driving will be exactly what you are talking about, this kind of linked network of vehicles that collectively will be able to gauge risk and, overall, holistically make things maybe safer. So I think there's definitely that component that exists. I think the second thing ties into what you're talking about with respect to this binary kind of, this false dichotomy, right? – Yep. – It's like binary code. It's either a zero or a one. – Yeah, there's lots of information, it's not just either or. – Exactly, and if you talk to people who operate in this space, either an auto manufacturer who's trying to go into autonomous vehicles, or on the software side, people who are developing the software, you know, almost uniformly, they will tell you that it's never binary. – Yeah. – It's always multiple, different outcomes and things that can happen, and you know, kind of based on what we discussed just a few minutes before about AI and machine learning and deep learning, and this idea that these systems will go through multiple permutations based on the data that's being inputted, historical data as well as data that's coming in live. Then, they look at what the different outcomes will be, and so what that tells us is that there will probably be multiple outcomes anyway, which is more reflective of reality, which gets to the criticism that a lot of people have about the trolley problem. Additional Readings Sage, A., Bellon, T., & Carey, N. (2018). Self-driving car industry confronts trust issues after Uber crash. Reuters. Retrieved from https://www.reuters.com/article/us-autos-selfdriving-uber-trust/self-driving-car-industry-confronts-trust-issues-after-uber-crash-idUSKBN1GY15F Kaur, K., & Rampersad, G. (2018). Trust in Driverless Cars: Investigating Key Factors Influencing the Adoption of Driverless Cars. Journal of Engineering and Technology Management, 48, 87–96. Retrieved from https://doi.org/10.1016/j.jengtecman.2018.04.006 Verger, R. (2019). What will it take for humans to trust self-driving cars? Popular Science. Retrieved from https://www.popsci.com/humans-trust-self-driving-cars Baram, M. (2018). Why the Trolley Dilemma for Safe Self-Driving Cars is Flawed. FastCompany. Retrieved from https://www.fastcompany.com/90308968/why-the-trolley-dilemma-is-a-terrible-model-for-trying-to-make-self-driving-cars-safer 4.3.3 AI and the Trolley Problem: Cultural Lag
I'm curious to hear what you think in terms of, again, cultural lag and how we could implement these things. There are certain jurisdictions that are way out front in terms of trying to establish the physical landscape that would allow these systems to take place. So, one of the more famous ones would be certain parts of Arizona, for example, in the United States, right? And they're trying to make sure the physical infrastructure is there to kind of speed through that cultural lag. But there is also an interesting juxtaposition: anytime an autonomous vehicle hits someone, or hits anything, it's global news, right? So what is that juxtaposition, and why is it news just because there's an accident? – One, I think the nature of media now is that people want to see headlines, right? And so I think there's something exotic still about AI, or an autonomous driven vehicle, even though statistically you're probably still safer, even now, than with the average driver. – Yeah, 'cause they'll be quick to point out, these cars have been driving around for thousands of hours– – Miles, right. – Yeah, thousands of miles. – Kilometres, yeah. – Yeah, exactly, yeah, yeah. – And we don't report on every accident that happens. – Right, or non-accident. – Or non-accident that happens. Yet if a vehicle that's been driven so long has one accident, just because it's autonomously driven, now it's an issue. So, I think there's a bit of a media frenzy around autonomous driven vehicles, partly because it's a bit sexy right now and partly because people are not sure what it is and what's gonna happen. I do think the Arizona example is interesting because definitely there are pockets of geographies in different places in the world, so in the United States, you mentioned Arizona, but if you go to Silicon Valley now, you see Google Waymo vans everywhere, right? – Yeah, on the Google campus, yeah, yeah, yeah. – And so, there's a bit of that. I think outside of the United States, one place that's really interesting, or been at the forefront of this, is Japan, which has instituted, at the national level, a series of legislation to allow autonomous driven vehicles and even trucks in the next few years. – Yeah. – And so they're quickly trying to build up the technological infrastructure as well as the physical infrastructure to allow these kinds of vehicles to operate more effectively and efficiently. – Yeah. – And so I think that's an important piece, and I think once you have kind of national and local leaders behind it, then the regulatory landscape will change pretty rapidly around that, and once that happens, insurance will change, ideas of liability will change, and those kinds of processes will start developing. – So let's talk about that for a minute. So again, for all of you out there, just think for a moment. Let's say that, talking about, again, the changing technology and how culture, and therefore laws and things, have to change and catch up to it. We're talking about regulation. Who would be responsible if there was an accident? So think about that for a moment. Let's say you're walking down the street. You're crossing the street and all of a sudden, an autonomous Uber or delivery vehicle, maybe a Google bus or something, cuts you off and ends up knocking you down, causing some injury. Think about it for a minute. Who would or should be responsible for that?
– Well, maybe as a starting point, think about what you said: if it was just a normally driven vehicle, without the autonomous software driving the vehicle, who would be responsible? And we would go through a very typical kind of legal analysis. Insurance people would be involved. A police officer would show up and do a police report, and they would probably attribute some negligence to the driver, or maybe to you if you were jaywalking. – Yeah, different people. – And there would be different people. I think the starting point would be similar, just because a vehicle is involved in an accident, even a vehicle that has no driver. – Yeah. It doesn't change the entire dynamic. – It doesn't change that entire dynamic. Exactly. Now, I guess that goes to a more fundamental question, though: let's say there's something inherently wrong with the software or with the vehicle itself, the autonomous driven vehicle. Then who would be responsible? Would it be the software programmer? The developer who created the AI software, or his company? Or would it be the car manufacturer that actually owns or manufactures the vehicle? Or would it be the owner of the vehicle, who's not even driving? – Yeah. – But they actually own maybe a fleet of these vehicles. – It does make it difficult, though. I mean, although it doesn't change things entirely, there is one big missing component there. It's the driver. Right, so currently, under tort law, almost everywhere in the world, if a car strikes someone then the driver is almost universally gonna be responsible, so it certainly does limit the number of people that could potentially be responsible. – Yeah, so I think you're right. Overwhelmingly, if a driver has an accident with a pedestrian, in most situations the driver is gonna be held responsible for that. I think the proxy for that moving forward, with autonomous vehicles, would be who owns the vehicle. Now, the thing that would be really interesting, I think, is the next iteration. In the next version, as this advances, the idea of owning a vehicle may be vastly different from before. – Yeah, yeah. – It may be owned collectively by a neighbourhood or by– – Or it could be a utility, like electricity. – Exactly, or it could be a utility, or just a company that has a fleet of taxis, but similarly, we'll just have a few. And so depending on how these assets are owned, the idea of ownership will also become very interesting, and how you hold those people accountable. – That is why I hold a little bit of concern in this regard, because typically, the bigger the actor is, the more challenging it becomes for you, as an individual, someone who's injured, to seek redress and to recover any type of damages. So for example, if it's you versus Uber, that's a significantly– – Power dynamic is very skewed. – Very different power dynamic than if it was me versus you, let's say, right? Additional Readings 4.3.4 AI and the Trolley Problem: Cultural Differences and Biases
Okay, so we've covered trust and responsibility, and the challenge of getting these things going from a cultural lag perspective. But I thought one of the most interesting things to come out of this, especially from the MIT study in particular, was the way various elements of culture, and perhaps bias, kinda came out, and the potential programming implications from an AI perspective. Can you talk about that for a minute? – Yeah, so I think that's what got picked up by the media the most. – Everybody was talking about it. – Everybody was talking about the cultural implications of this Moral Machine, the data that came out of these, basically, surveys that people were doing. Effectively, how different cultures, or at least the way it was painted, was how different cultures prioritise life in a sense. If that life was in a car with you, it could be more valuable in a certain cultural context than the life that's outside the car, that potentially you're hitting. And so how do you try to protect one life over the other? And so, you know, that's a simplistic explanation, but you know, there were a lot of very interesting takeaways that people had. They found that Chinese respondents were more likely to choose hitting pedestrians on the street, instead of putting the car's passengers in danger, and were more likely to spare the old over the young. People from Western countries tended to prefer inaction, letting the car continue its path. So, kind of like inertia. While Latin Americans preferred to save the young. – Okay, so I wasn't that surprised when I saw the results from the MIT study and it showed that Asians, for example, were more likely to preserve the life of the elderly at the expense of the young. I think, having been in Asia for the past 20+ years, many cultures here have a reverence for the elderly. – Deference. – Deference at least, yeah. And so, I think there were certain things like that, that maybe weren't that surprising, and fit certain cultural stereotypes that have, I think, been around for a long time. But I guess the bigger question is not whether these cultural preferences exist, but what we should do with them as a result. Especially when programming FinTech and things in the future, right. I think you and I both know one of the challenges that lawmakers, or ethicists like us, or companies have as they're trying to create a moral code for their employees: trying to create a moral code that permeates culture and goes across country lines, right. So, is it possible, or should it even be a goal from an AI and technology standpoint, for us to create a uniform sense of morality? – Yeah, that's a really important question, to be honest. So I think, if we take one step before we get even to the technology, I think the example you gave of, let's say you have a Western company that does business all over the world – Yeah. Asia, Africa, the Middle East– – They write a code of conduct in California. – Yeah– – So then they have to apply it everywhere. – And now they want to make it universal. – Yeah, yeah. – But potentially what is right in their initial cultural context may actually be questionable in a different cultural context. – Or, still perhaps "right" from a legal or moral perspective, but communicated in a way that doesn't resonate with local people. – Sure. And so, there are a lot of implementation challenges, to say the least, when companies try to embark on this kind of initiative.
So, if we transport that into AI, and just technology in general, what comes to mind here is that the automobile industry is a global industry, right. We have car manufacturers in China, in Japan, in Korea, in the United States, and a whole host of other places. And so, think of a programmer who sits somewhere in Asia, with a certain cultural context, programming a particular type of AI into a vehicle, perhaps informed by some of the results from the MIT study. Let’s say that vehicle is then imported or shipped to the United States, and that particular cultural context bleeds into how the vehicle operates in a different cultural context. – Right. – And then how does that vehicle, with its particular cultural influence, interact on the road with other vehicles that have a different cultural influence? I think that’s a really fascinating and important question. It’s a microcosm of a greater host of challenges that AI will bring to the forefront, the type of things that we need to discuss as a society. – Yeah, and so, again, foreshadowing a little bit, but also revisiting the very first module. Really, while these are interesting practical challenges that all of us have to consider as we enter into this new wave of the Fourth Industrial Revolution, we haven’t even touched upon the most critical of these issues: the fact that one of the most common forms of work globally is driving. – Drivers. Certainly within the US, within China, et cetera. There are so many millions, and millions, and millions of drivers around the world, and so this kinda leads into more fundamental, systemic, social questions: if we remove these jobs from the equation, how do we then reintegrate those people into the workforce? How do we ensure that society is able to absorb those people, provide them not only jobs, but a sense of well-being? And that’s something that we’re gonna be considering in the next few modules. Additional Readings 4.4.1 Data and Models
Since data is so critical to AI, as well as to many of the other technologies that underpin FinTech, it is important not only that the right data is being used, but also that such data is not biased. The phrase “garbage in, garbage out” has probably never been more apt, nor as important, than when describing AI. And bias can find its way into AI in a few ways. Let’s take a simple example. If a computer model is using data that is already contaminated by some level of discrimination, then the output will also inevitably be prejudiced. So say, for instance, your AI relies on data from apartheid-era South Africa; well, chances are that data incorporates the widespread racist policies that existed at that time. Obviously, this would lead to less than ideal outcomes. And even assuming your data is free of such explicit bias, there are other ways for bias to possibly creep into artificial intelligence. For example, cultural bias and norms can inadvertently be programmed into AI, because a programmer from one culture might value some characteristic differently than a programmer from another part of the world. We’ll explore this a bit further when we revisit the trolley problem. There are other potential issues that also relate to bias. AI is driven by algorithms and models. In her thought-provoking book Weapons of Math Destruction, Harvard-trained mathematician Cathy O’Neil identifies three characteristics of a possibly dangerous model, or what she refers to as a “WMD”. The first characteristic of a dangerous model is that the model is opaque and not transparent. This would be a system that is what we call a black box, where it’s difficult for those on the outside to really understand what is going on behind the scenes. The second is that the model is scalable and can be used broadly, across large populations. Now of course, this has been a key component of what we’ve talked about thus far: the issue with a lot of these AI and other forms of technology is that they can scale beyond anything we’ve seen before. And the third is that the model is potentially unfair in a way that would negatively impact or even destroy people’s lives. So for example, if AI was being used, from a FinTech context, to determine who could get a mortgage to purchase a home, who has access to credit, and so on, these would be things that could have a significant negative impact if someone was not granted access to them. So despite all the good that will certainly accompany the rise of AI, it’s also pretty clear that biased data, in conjunction with possibly suspect models, has the potential to create more risk, unfairness, and inequality, which is why it’s important to be aware of their impact and invest time thinking about how to prevent such problems now, before the technology is fully mature and really permeates our lives. So in the next few cases, we’re going to look at some of these warning signs in real life scenarios.
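To make the “garbage in, garbage out” point concrete, here is a minimal sketch, in Python, of how a naive model fit on a discriminatory lending history simply reproduces that history. Everything here is invented for illustration, the data, the groups, and the frequency-based “model”, and it stands in for the far subtler ways a real machine-learning system can absorb the same kind of pattern.

```python
# A minimal sketch of "garbage in, garbage out": a naive model fit on
# historically biased loan decisions reproduces the bias it was trained on.
# All records below are invented for illustration.
from collections import defaultdict

# Hypothetical history: identical incomes, but past officers approved
# group "A" applicants far more often than group "B" applicants.
history = [
    {"group": "A", "income": 50_000, "approved": True},
    {"group": "A", "income": 50_000, "approved": True},
    {"group": "A", "income": 50_000, "approved": True},
    {"group": "A", "income": 50_000, "approved": False},
    {"group": "B", "income": 50_000, "approved": True},
    {"group": "B", "income": 50_000, "approved": False},
    {"group": "B", "income": 50_000, "approved": False},
    {"group": "B", "income": 50_000, "approved": False},
]

# "Training": learn the historical approval rate for each group.
totals, approvals = defaultdict(int), defaultdict(int)
for record in history:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]

def predict_approval(group: str) -> float:
    """Score a new applicant using nothing but the biased history."""
    return approvals[group] / totals[group]

# Two financially identical applicants get different scores purely because
# the training data encoded a discriminatory past.
print(predict_approval("A"))  # 0.75
print(predict_approval("B"))  # 0.25
```

Additional Readings 4.4.2 Mortgage Application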
In the last section we explored AI, particularly in relation to autonomous vehicles, and considered really important topics around trust, accountability and the impact of culture. Next, we will look into AI bias, specifically in the context of assisting human decision-making. Often, when we think of AI or algorithms, we think of something impartial and neutral, something that is simply acting based on pure facts. And this is one of the reasons why we humans have begun using AI to help us with more subjective evaluations and decisions. If we can remove human error from decision-making, that would lead to a more just and better world, right? But the reality tends to be that algorithms are not as neutral as many have come to hope. This is because of bias that gets programmed in, whether cultural bias from the programmer or historical bias in data that is prejudiced in some way. Google AI chief John Giannandrea has said that his main concern regarding AI does not revolve around killer AI robots or Terminator sorts of things; instead, he is worried about the biases that, he says, “may be hidden inside algorithms used to make millions of decisions every minute”. So, first of all, what do we actually mean when we say that AI or an algorithm is biased? If you recall our talk about machine learning, a vital part of that revolves around the training of AI: training it to see and follow patterns by feeding it large amounts of information and data, training it to understand what success looks like, fine-tuning the results and reiterating, and so forth. And in this process there is the possibility of human error and prejudice integrating itself into the algorithm. Let’s take a look at another example. In the past, if you were about to buy a home, you would typically meet in person with a mortgage officer, probably at your local bank. You would visit their workplace, have a chat, provide any relevant documentation; this person would then review your documentation and later give you a decision on whether the bank was going to lend you money or not. For the lending officer, this would typically be a fairly subjective exercise, because the majority of home loan applicants fall into some level of grey area where there’s no definitive “yes or no”, so they have some discretion. So, with the recent advent of more advanced algorithms, and to increase efficiency, this process has been simplified for many banks, where the decision-making is now, to some extent, outsourced to AI, which makes the recommended loan application decision. By doing so, this process should be more accurate, objective and fair, right? Well, not always. Amongst the many studies that have been done, a recent study by the University of California, in particular, found strong bias and discrimination by these “AI lenders”, such as charging 11 to 17 percent higher interest rates to African American and Latino borrowers. Additionally, minority applicants are more likely to be rejected than white applicants with a similar credit profile. Now, lending discrimination is not something new, and has been reported on a lot in the past. The Washington Post, for another US example, uncovered widespread lending discrimination back in 1993, showing how various home lending policies were negatively impacting minority residents. What further complicates the problem around AI bias is what people refer to as blackbox algorithms.
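As a rough illustration of how such disparities can be surfaced, here is a hedged sketch of a simple outcome audit over a lender’s decisions. All records and numbers are invented, and the 80% “four-fifths” threshold at the end is a heuristic borrowed from US employment-discrimination practice, not a test used in the study above.

```python
# A toy audit: compare approval rates and average interest rates across
# groups in a lender's decisions. All records are invented for illustration.
decisions = [
    {"group": "white",    "approved": True,  "rate": 4.0},
    {"group": "white",    "approved": True,  "rate": 4.1},
    {"group": "white",    "approved": False, "rate": None},
    {"group": "minority", "approved": True,  "rate": 4.6},
    {"group": "minority", "approved": False, "rate": None},
    {"group": "minority", "approved": False, "rate": None},
]

def outcome_summary(records, group):
    """Return (approval rate, average interest rate) for one group."""
    subset = [r for r in records if r["group"] == group]
    approved = [r for r in subset if r["approved"]]
    approval_rate = len(approved) / len(subset)
    avg_rate = sum(r["rate"] for r in approved) / len(approved)
    return approval_rate, avg_rate

for g in ("white", "minority"):
    approval_rate, avg_rate = outcome_summary(decisions, g)
    print(f"{g}: approval {approval_rate:.0%}, avg interest {avg_rate:.2f}%")

# One common heuristic (the "four-fifths rule" from US employment law):
# flag the system if a protected group's approval rate falls below 80%
# of the most-favoured group's approval rate.
white_rate, _ = outcome_summary(decisions, "white")
minority_rate, _ = outcome_summary(decisions, "minority")
print("disparate impact flag:", minority_rate / white_rate < 0.8)
```

Note that an audit like this only sees outcomes; it says nothing about why the model decided as it did, which is exactly the blackbox problem discussed next.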
These blackbox algorithms are similar to the opaque models we discussed earlier, lacking transparency. And really, private companies are generally hesitant to open the door for other people to scrutinize what they’ve been doing. So how do we make an inclusive algorithm when the data, its developers and the organizations who hire them are seemingly not diverse and inclusive? Overall, while algorithms are helpful, they may not make things as fair as we ideally would have hoped. And we therefore have to be careful about blindly applying them – especially since they have a tendency to repeat past practices, repeat patterns, and automate the status quo. Additional Readings Knight, W. (2017). Forget Killer Robots – Bias Is the Real AI Danger. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/ West, S.M., Whittaker, M. and Crawford, K. (2019). Discriminating Systems: Gender, Race and Power in AI. AI Now Institute. Retrieved from https://ainowinstitute.org/discriminatingsystems.html Brenner, J. G., & Spayd, L. (1993). A Pattern of Bias in Mortgage Loans. The Washington Post. Retrieved from https://www.washingtonpost.com/archive/politics/1993/06/06/a-pattern-of-bias-in-mortgage-loans/d04bcb29-d97b-44b5-b4e0-93db269f8f84/ Counts, L. (2018). Minority Homebuyers Face Widespread Statistical Lending Discrimination, Study Finds. Berkeley Haas. Retrieved from http://newsroom.haas.berkeley.edu/minority-homebuyers-face-widespread-statistical-lending-discrimination-study-finds/ Hao, K. (2018). Can you make an AI that isn’t ableist? MIT Technology Review. Retrieved from https://www.technologyreview.com/s/612489/can-you-make-an-ai-that-isnt-ableist/ 4.4.3 Mortgage Application – Trust
So let’s think about a question. Imagine that you went to a bank and you applied for a financial product, like a loan for a home. And you submitted all the paperwork, it was processed by the bank, and a few days later you were rejected. And you went back to the loan officer and asked why. And they said, hey, our AI decision-making software screened and scanned your application and said, unfortunately, no. What would you do? Dave, what would you do? – It’s tricky, right? I mean, it’s already hard enough to communicate with banks as it is, and now they’re moving it into this completely amoral space where, essentially, this software is gonna be making the decision. And I mean, I’m not really sure you would have any recourse, would you? Like, they’re not gonna give you access to the algorithm, they’re probably not gonna show you exactly why, and it just seems like it would be one step further away from, kind of, a balanced negotiation between you and the service provider, right? – Yeah, and so I think that’s a good point, and the idea of not having recourse is really key. Because it raises really fundamental questions about what’s fair, right? Because if, let’s say, your rejection by the bank was based on some level of latent discrimination, based on biased data or other forms of bias that may exist in that AI process, then there are some issues of fairness if you can’t go rectify that. – Yeah, well, why is it so critical that banks, or really even we more broadly, think of these questions now? Especially something like bias. I mean, isn’t that something that should come out later on? – I think what we’re finding already is that the longer we wait, the more difficult it will be to implement, kind of, cleaner AI that has, you know, cleaner or better-filtered data. And partly because data compounds. You know, there are troves of data being produced every day, and if we’re not aware of how that’s compounding and of the negative inputs that are already there, then that potentially becomes a problem. Particularly since a lot of that data is already based on historical data that we know incorporates bias, because, you know, societal norms were different in the 1960s or 1970s versus what they are today. – Well, can’t they just clean it up? Like, can’t we just make it neutral somehow? – Well, I think in certain cases you could perhaps be able to do that. But in a lot of situations, in the process of cleaning that up, perhaps other factors of the data that you may need to rely on also get influenced, so that data is also not clean anymore. And so this then becomes a little bit of a catch-22: fixing one problem creates another problem. – Yeah, okay, so where do you fall on this line? So let’s say we know that humans are very imperfect, and we know that most of the bias that we have in the data is there because humans are biased and will discriminate for race or gender or nationality and a whole host of other reasons. And so, on the one hand, we clearly do not have a perfect track record. But on the flipside, we are now potentially entering into this area of complete amorality that’s going to be built on the back of existing historical data and could introduce a whole new set of biases, or even worse, entrench existing bias into these decision-making processes. So what would you trust more?
Do you trust the kind of human bias that’s inherent in the existing systems or do you trust the potential bias in these data sets and AI that’s gonna be coming, you know, in the next few decades? Additional Readings 4.4.4 Mortgage Application – Accountability
Well, I don’t know, Dave, that’s a difficult question. I think maybe we go back to the basic framework we talked about earlier, about models that are potentially dangerous. We asked: is the model opaque and not transparent? Does it have the possibility to scale and potentially be used by large numbers of people? And is the harm or potential damage that the model causes substantial? And I think in the example of a home loan, all three definitely apply. Banks definitely will not open up their AI model, or decision-making model, to tell you how– – Not willingly. – Not willingly, how they made a decision. So, that would be quite rare. Secondly, the scalability of this is quite large. You can imagine some of the largest banks in the world, with thousands or hundreds of thousands of clients; the impact, or the scalability, would be quite large. And then lastly, for each of those individual customers, the impact on their life could be huge. – Huge, yeah. – The difference between having a home and not having a home; what could be more fundamental to a person’s well-being, or sense of psychological stability, than the opportunity, when they’re ready, to purchase a home? So, these are really fundamental things that I think we have to think about. Now, going back to the idea of recourse and the balance of, do we trust the human, even though we have bias? Or do we trust the AI, even though that also has some level of bias? I think it’s a bit of a mix, and I think we need both. I don’t think we can completely do away with the human element. – Right. – And rely completely on the AI, but we can’t go overboard the other way either. And in our use of AI, one of the big draws is that it will hopefully make us more efficient, right? Some of the repetitive tasks and the things that take up a lot of our time, maybe we won’t have to do those anymore. But in our pursuit of this greater efficiency, I do think we still need to sacrifice a little efficiency to keep a human element there. So, when we go back to the example, do you have any recourse? Well, what would be great is if banks continued to have somebody there that a rejected applicant can go to and say, hey, I got rejected, I just wanna understand why. And then you could have somebody there to explain the process and potentially follow up to see if things were interpreted incorrectly. I think that would be the best of both worlds. Now, of course, not a lot of organisations may be willing to do that, but I think there will be organisations that will. Particularly as we have these debates about how to balance this responsibility and this trust between the different stakeholders involved. – Yeah, and you say that banks or financial institutions wouldn’t willingly allow people to see their algorithm or other kinds of inside data. I completely agree that they wouldn’t willingly do that. But I wonder if transparency really is the key to ensuring these things. And they do say sunlight is the best disinfectant, from an ethics perspective. I wonder if that is the eventual future of this.
If we’re going to rely so heavily on these products, if we’re going to rely on them so heavily from a financial, entire-industry perspective, I wonder if the eventual step would be something like a patent, where you’re granted a patent but, in return, you have to provide very public data on the creation and various components of that particular device. I wonder if that means that eventually, if you hit these three things, if it’s not transparent, if it’s really scalable, and if the potential for harm is very significant, there would be either a public or even a kind of private governmental disclosure required, to show that there isn’t bias within the system? – Yeah, and I think that’s a really interesting point. And at a broader level, some people are talking about this in the context of large technology companies that have grown so much and become such a part of our lives. And maybe those companies shouldn’t be considered just a normal company. – Right. – Because they’re so influential, maybe we should regulate them like a financial company, or even a public utility. – Even a utility. – That’s right. – And so, I think that’s part of the broader debate we’re having as a society, to understand how we want to manage the increasing influence of these companies in our lives. – Okay, so that’s something to think about. From the standpoint of artificial intelligence, as these things become more and more ubiquitous and utilised around us all the time: are they making life more transparent, more efficient and more unbiased? Or are they actually entrenching existing biases, and therefore further distancing certain segments of society from the financial markets and from financial inclusion? Additional Readings 4.5.1 Social Credit
Let’s look at another example and talk a little about credit, a pillar of the modern financial system. For many people in the world, credit is part of everyday life, ranging from credit cards to borrowing money from a bank to buy a home. For many, the ability to access and use credit is largely defined by a credit score, which ultimately gauges how likely a particular person is to repay the money they have borrowed. Conceptually, the better the score, the lower the risk. In the United States, we have something called a FICO score, named after Bill Fair and Earl Isaac, founders of the Fair Isaac Corporation, which initially produced these scores. To calculate a FICO score, different financial data, such as bank account information, existing debt levels, payment history, and other related information, are used together. Many other countries now also have their own versions of these scores. In theory, the use of these scores is important because individuals can more freely access capital and other financial products, since banks and financial institutions are more willing to lend money, and likely at lower interest rates, because they have this credit information. So a mature credit system makes accessing capital easier. And for many in Hong Kong, or the UK, or other countries with developed financial systems, the notion and use of credit is quite mature, a given, really, almost an afterthought. But what if there was no credit score for a financial institution or bank to assess your risk when you needed to borrow money? How might that impact you? Well, that bank may require you to pay a really high interest rate or pledge a lot of collateral, even for a small loan, or they might even require both. It was due to such challenges that microfinance lending organizations, like the Grameen Bank, founded by Muhammad Yunus, were formed. Now, the issue of credit really becomes apparent when you consider there are approximately 2 billion people in the world that are unbanked. This basically means that roughly 25% of the world’s population doesn’t have a bank account. Without access to the financial system, which for most people in the world is through a bank, it is of course extremely difficult to develop a credit history and a credit score. The lack of this information makes it difficult for the unbanked to access credit, which means to borrow money, leaving many mired in the same financial situation. So the exciting thing is that FinTech, paired with mobile technology, can help solve this conundrum. With the rise of mobile phones, and particularly smartphones, and the shift to digital banking, there’s a lot of opportunity. Many of today’s unbanked may never, or at least rarely, access a traditional brick and mortar bank, but increasingly many will patronize digital banks, even online-only banks, and other digital financial services via their mobile device. This is, and will be, incredibly empowering for many of the world’s neediest populations, and one of the great potential democratizing aspects of FinTech – giving people more opportunities. Now, for people that may be using mobile devices but are still not yet fully integrated into the financial system, or have only minimal financial data, there is still the problem of trying to determine their credit. So one alternative to traditional forms of credit analysis is the rise of social credit.
In its simplest form, social credit basically means that any kind of data, not just financial, can possibly be used to determine some level of credit. For example, your Facebook network and your relationships there, the type of people you most frequently message on your phone, or the amount of time you spend watching Taylor Swift videos on your phone, and a whole host of other behavioral and relationship knowledge that is not necessarily financial, can be utilized by AI-backed algorithms to compile a profile on you – a social credit profile – that may have an impact on your financial and social life. Sounds fascinating, but is this okay? What are the benefits? What are the risks? Aspects of social credit are being rolled out in various ways already. At a national level, China is implementing its own indigenous social credit system, a reputational scoring system that applies to individuals and companies, with the intention for it to eventually score all of its citizens once the system is fully developed. The early stages of this social credit system have already garnered attention, as almost 10 million people have been banned from domestic air travel in China alone, all based on their social credit score. Other potential impacts include limiting access to certain educational opportunities or employment, and social credit scores could even impact one’s Internet speed. But it’s not just nation-states; private sector actors are really leading the charge. Ant Financial, one of the world’s largest FinTech companies, related to Chinese technology giant Alibaba, has also started developing its own form of alternative credit, dubbed “Sesame Credit”. In addition to the traditional financial information that something like a FICO score might include, Sesame Credit also incorporates other information, like the online behavior of a person, especially in the context of their activity within the Alibaba ecosystem. A high Sesame Credit score improves the user’s “trust” level within the system and facilitates access to Ant Financial products. But China is not the only place where social credit analysis is growing. Even in Silicon Valley, you can observe aspects of social credit. Dealing with myriad issues related to fake news claims, Facebook has developed its own rating system to gauge the reliability and trust of its users. One criticism of this, however, is that even if such a tool might be necessary, it’s not transparent. And this is something we’ve discussed before, this idea of transparency. The use of social credit will continue to expand, either as a direct proxy for, or at the very least a supplement to, traditional financial credit. Maybe nowhere is this more apparent than in the peer-to-peer (P2P) lending market, which is another important part of the FinTech landscape. Many P2P platforms incorporate some aspect of social credit in their models. For example, one of the larger P2P platforms, Lending Club, which is listed on the New York Stock Exchange, was originally an application on Facebook that spun off. Prior to its IPO in 2014, Lending Club frequently mentioned that social relationships were an important part of its model and that social affinity and other non-financial factors helped lower the risk of non-payment. As P2P platforms grow, more data becomes available, and AI capabilities advance, it will be interesting and important to consider how social credit will be used in the future to influence our lives.
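To ground the idea, here is a toy illustration of how a social-credit-style score might blend a FICO-like financial component with non-financial behavioral signals. Every feature, weight, and threshold below is invented for this sketch; it does not describe Sesame Credit or any real system, and the gaming penalty deliberately encodes the kind of contested moral judgment discussed later in this module.

```python
# A toy social-credit-style score. All features and weights are invented;
# no real scoring system is being described here.

def financial_score(on_time_payments: int, debt_ratio: float) -> float:
    """FICO-like component built only from financial data (0.0 to 1.0)."""
    payment_signal = min(on_time_payments / 24, 1.0)  # two years of history
    return 0.7 * payment_signal + 0.3 * (1.0 - debt_ratio)

def behavioral_score(phone_bill_streak: int, gaming_hours_per_week: float) -> float:
    """Behavioral component: non-financial proxies for 'trustworthiness'."""
    bill_signal = min(phone_bill_streak / 12, 1.0)          # on-time phone bills
    gaming_penalty = min(gaming_hours_per_week / 40, 1.0)   # a contested moral judgment
    return 0.8 * bill_signal + 0.2 * (1.0 - gaming_penalty)

def composite_score(on_time_payments: int, debt_ratio: float,
                    phone_bill_streak: int, gaming_hours_per_week: float,
                    social_weight: float = 0.4) -> float:
    """Blend the two components; social_weight sets how far the proxy reaches."""
    fin = financial_score(on_time_payments, debt_ratio)
    soc = behavioral_score(phone_bill_streak, gaming_hours_per_week)
    return (1 - social_weight) * fin + social_weight * soc

# Two applicants with identical finances: only their gaming habits differ,
# yet the behavioral proxy moves their scores apart.
print(composite_score(24, 0.2, 12, 2))   # light gamer
print(composite_score(24, 0.2, 12, 35))  # heavy gamer, same finances
```

The design point to notice is `social_weight`: the larger it is, the more a non-financial proxy stands in for actual financial behavior, which is exactly the slippage between proxy and reality discussed in section 4.5.3.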
Additional Readings 4.5.2 Social Credit – Subjectivity of Morality
So, revisiting our earlier example about purchasing a home. Imagine you go to a bank to get a home loan, and in addition to the financial information you give the bank to evaluate your creditworthiness, they ask you for social information: your behaviour on your phone and your computer, what websites you frequent, what kind of games you play and for how long, what kind of music videos you watch on YouTube. How would that make you feel? And how would that impact how you live your life on a daily basis? Dave, how would you feel if this was the situation you were in? – To be perfectly blunt, I think it’s kinda scary. And it’s something where I don’t really get caught up in a lot of the more dystopian futures of AI and stuff; I feel like we’re probably a long way off from that. But this is one area, from a behavioural modification standpoint, where I feel there are pretty concrete historical examples, not even that long ago, of broad-scale social change through behaviour modification, especially when looking at peer groups, family members, educational history, religious or other beliefs, that led to fairly dire consequences. So, let’s start with the good stuff, let’s not be too negative. Some of the most successful examples of this, in Africa and in developing parts of Asia, involve a really, really simple aspect of social credit, which would be whether or not you pay your mobile phone bill. If you are in Kenya and you’re utilising one of the mobile banking payment platforms and you don’t have a bank account, whether or not you pay your mobile phone bill each month is probably the best available evidence of your creditworthiness. – So how likely you are to repay. – Exactly. – Because you’ve paid your phone bill for the last year. – When that came out, I thought it was brilliant, and I think millions and millions of people have benefited from that aspect of social credit scoring. On the flipside, there are other examples of this where they’ll look at browser history, they’ll look at how many hours you spend each week playing video games, they’ll look at what I would consider more moral decision-making, and that’s the type of stuff that concerns me. – Okay, and so, when we think about morality, frequently people think about who gets to decide what’s moral or not. Is that the concern you’re referring to? – That’s exactly right. I mean, think about it. If you’re at home, right, who is to say whether your behaviour is specifically good or specifically bad, especially when we’re talking about accessing credit? One of the examples given is that playing video games is bad, and so people that play a lot of video games should be less worthy of credit. Even if I agreed with that on a personal level, which I don’t necessarily, it’s very dangerous to think that a small group of people, probably men, whose identities and deliberations we don’t really know, are going to be the ones to determine what is moral and therefore what is acceptable in society. And as we’ve already discussed, this can have extremely broad implications in terms of whether or not you can buy a house, whether you can get a visa to travel outside the country, or in some cases even determining what types of majors you can have, what types of careers you can enter into. – And I think, turning to something you didn’t mention, the potential risk of the impact: we’ll start modifying how people act and behave. I think that’s really important.
You know, philosophers from long ago to more modern philosophers have talked about this idea of what observation does to people’s behaviour. Even though nobody is physically compelling you to do something, the fact that you feel like you’re being watched, even if you may not actually be being watched, starts shaping your behaviour. And that is a very interesting, as well as scary, proposition. – Potentially, and again, not to be too negative here, because the reality is that you and I, we generally conform to the best aspects of human behaviour. That is why, as a species, generally speaking, we get better and better; there is less violent crime right now. We tend to mirror the best elements of our humanity. But I’ll give you a quick example. My father, well, not just once, he used to say this a lot when I was young. He would take me to go perform service within the community, and I, like many teenagers, would go quite begrudgingly, you know, complaining the whole way. And he would say to me, “If you don’t want to do this, then this is not going to be something that counts as a benefit to you.” Meaning that I had to actually want to do it in order for it to be service that would benefit me, kind of spiritually or, you know, psychologically. And so this runs into the question, when you are trying to modify behaviour from an ethics context: can you compel people into a certain type of behaviour and thereby make them good? Can you compel people into goodness, or do you have to educate them and inspire them into goodness? – I see, that’s interesting. So the idea goes to the kind of internal, inherent motivation that the person has in the action; even though two people may be doing good things, we actually think maybe the motivation for doing the good thing sets them apart. – Yeah, and if history is any example, when societies have tried to compel a good type of moral behaviour, that has often led to some of the most dire consequences, socially speaking, because people will not feel that inherent sense of shame or morality in their decision-making; instead, they often look to avoid those things and then become very disassociated from society. And it can create some very significant perverse incentives. – Okay, is that similar to this idea of a checkbox morality, in the sense of, if I’m doing these things that are supposed to be good in society, I’m a good person, when in fact just checking the boxes may not mean that you’re actually a good person? – Yeah, well, there are two aspects to it. One is checking the box and therefore feeling like, as long as I tick the boxes, I’m a good person, and anything outside those boxes is okay, justified, because I’ve ticked the boxes. But the second one, which is slightly more pernicious, is the idea that we’re ticking the box just to tick the box, when we know that that is not necessarily our true intent or true desire, and that’s when I think some of the more malicious stuff can come through. And again, there are historical examples of this: genocide or significant inequality perpetuated simply based on false definitions of morality. You know, just to give an example for those who are confused at home, what if, based on my sense of morality, I believed that a particular minority race was not worthy of voting, not worthy of financial credit, was not allowed to own property, right?
I could say that God has told me this is the right thing to do and that is my definition of morality, when in actual fact, you know, we as a society would hopefully say that’s actually a pretty terrible thing. – Yeah, and to your point, that’s happened myriad times – Many, many times, very recently. – Across the history of humanity, right? A lot of that discrimination potentially was based on religion. Some of it was based on how we look, where we were born. – Political perspective. – Exactly. Additional Readings 4.5.3 Social Credit – Accountability
The other concept that is interesting to me relating to social credit is, on one hand, you’re right: social credit has been incredibly enhancing for populations that can’t access traditional forms of financial credit, which otherwise limits them from accessing money, like you talked about. – Yeah, yeah. – The thing that is a bit, not disturbing, but gives me pause about social credit is that we are using social credit as a proxy for financial credit, or financial data. – Yeah, yeah. – And whenever we use a proxy, it’s generally rare that it’s one for one. So if we see a one here and we look at the proxy, it’s rare that the proxy matches up exactly. There’s usually some slippage, right? Or some parts that don’t overlap. – Yeah. – And what I’m afraid of is, if we inculcate the idea that using a proxy is somehow the same thing as using the real thing, and that leads into the ethos of how we think about AI and FinTech and these technologies, then we can really find ourselves in a situation where we assume that’s okay, but in reality, the proxy and the real thing actually don’t overlap that much. – Yeah. And then we move farther and farther away from what the real objective actually was. – Yeah, and I think, of course, that can be right. I think, on the flipside, there are a lot of examples where traditional credit scores have been shown to be problematic. False information, you know, identity theft; anyone who’s had their identity stolen before knows how incredibly difficult it can be to clean up your credit. And so I think, at the end of the day, what we’re saying is, social credit and other AI and machine learning-based credit rating systems can be incredibly, incredibly powerful, and can bring people to the financial markets that have never had access to them. But, like everything else that we’re talking about in this course, it requires those aspects of transparency, trust, proximity, to ensure we have the rules right up front, that we’re thinking about those things up front, so that we’re building a better system and not entrenching these biases into existing systems. – And I think that is critical, what you just mentioned. Because what will happen, and what has already happened, even before the advent of AI and these other technologies, is that if some sort of interesting process or non-AI technology came into existence, it tended to get rolled out to other parts of the world. – Yeah. – And if a particular form of AI or technology seems to be effective, then it’s very easy for that model, for that algorithm, to start propagating into sectors and industries and geographies that it was never intended for. – Absolutely. – But we just assume that it’s okay, because it worked well in California, or England, or Australia, or something like that. – Yeah, or in a particular industry. Mortgages; use it for car loans, and so on, yeah. – That’s right, and we assume that it will be a very easy transition across industries or sectors, where actually that’s not necessarily the case, and in fact, it could cause more danger. So it goes back to this idea of scalability. Now we’re really scaling across the world, across geographies, across industries, and then across people, ultimately. – Yeah. Additional Readings 4.5.4 Social Credit – Privacy
We have another question for you. What are the implications of such systems? So Dave, what do you think? What are the implications of these social credit systems that we’re talking about, backed and powered by AI? – Well, okay, so let’s go back to what we were talking about with, say, the Taylor Swift and Jacky Cheung concerts, right. – Oh, that’s always an interesting topic. – What we talked about back then was, from a security standpoint, the guy gets caught, right? So that’s a good thing, and one of the questions you asked me was, well, why should it be a big deal if the bad guy gets caught? We shouldn’t want him there in the first place. My response then, and really how I relate to this now, is: yes, absolutely, we want to have as secure an environment as possible, but I think we at least have to ask, at what cost? Right, the idea being that if we are granting this incredibly broad level of, well, not granting, we don’t have any control over it, but if there’s this broad level of surveillance, if we have this mobile technology on us all the time, and if we are now going to be introducing social aspects of behaviour into, I mean, really rating us, literally rating us. I think these are some things that we at least need to think about collectively as a society, to understand what we are giving up for this, right? – Okay, now that’s an important question that we all need to consider and think about. What is the sacrifice we’re willing to make to have this increased security? In the context of the concert that we talked about, there was this massive video surveillance. I think one compelling aspect of this is that AI is powering the video surveillance, making it much more effective, and now we have aspects of social credit about our behaviours. So I think in the future you could easily see a situation where those are paired. – Yeah. – As a broader way to surveil or control populations. – You might have higher or lower credit because you like Taylor Swift. – That’s right. – Right. – And that composite is created through many more data points now. Through surveillance that’s happening: where you frequent, how frequently you go to 7-Eleven. – Yeah. – On a particular day, at a particular time, right. Or how frequently you stay out late at night. All these things are now going to be able to be captured, through observation as well as through the actual data that you are creating through your own usage of various devices. – Yeah. Module 4 Conclusion
So, what does the future hold? Of course, no one really knows exactly. But what is certain is that AI is gonna be a big part of it. Famed inventor and noted futurist Ray Kurzweil predicted that a technological singularity will be reached in 2045. Such a singularity basically represents a future where AI-powered super-human intelligence is so powerful that it will create even more innovative technologies, which could possibly lead to very new realities that change our assumptions about intelligence and perhaps the nature of our very existence. This could be the path to the utopian existence previously portrayed only in science fiction movies. That said, there are those, like Nick Bostrom, a philosopher at Oxford, Director of the Future of Humanity Institute, and author of Superintelligence, or Elon Musk, whom most people know, who have expressed serious reservations about a post-singularity future. Even Stephen Hawking once mentioned, “The development of full artificial intelligence could spell the end of the human race.” Wow, that sounds scary. Well, this doomsday scenario is largely motivated by the possibility and fear that AI may become so advanced that we may not be able to control it, and perhaps such AI will eventually want to manage and control us. Additionally, there are other AI-related concerns, many of which have already been touched on in this module, ranging from fairness to privacy to displaced labor. As an AI-dominated future becomes more imminent, communities of concerned technologists, lawmakers, and other interested parties are coming together to grapple with and define the ethical issues surrounding AI. This is happening in China with the formation of a national-level AI ethics committee; other examples include the European Union’s High-Level Expert Group on AI, which has released its own guidelines on the ethics of AI, and even in the US, there’s an organization called The Partnership on AI, a collection of leading global companies and institutions working together for the stated purpose of “shaping best practices, research, and public dialogue about AI’s benefits for people and society.” So what’s next? In his best-selling book, Zero to One, well-known Silicon Valley entrepreneur and investor Peter Thiel wrote: “…humans are distinguished from other species by our ability to work miracles. We call these miracles technology.” AI makes the possibility of such miracles much more real. Ultimately, the future is not fixed, nor is its outcome certain, and because of that, each of us, you and us, has the opportunity to shape it. And hopefully this module has compelled you to further consider the parameters, and maybe even limitations, that we need to place on these technologies, which have the potential to do so much, but potentially at great cost as well. Additional Readings Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. Hauer, J. (2016). The Funny Things Happening On the Way to Singularity. TechCrunch. Retrieved from https://techcrunch.com/2016/04/09/the-funny-things-happening-on-the-way-to-singularity/ Metz, C. (2018). Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots. The New York Times. Retrieved from https://www.nytimes.com/2018/06/09/technology/elon-musk-mark-zuckerberg-artificial-intelligence.html Metz, C., & Isaac, M. (2019). Facebook’s A.I. Whiz Now Faces the Task of Cleaning It Up. Sometimes That Brings Him to Tears. The New York Times.
Retrieved from https://www.nytimes.com/2019/05/17/technology/facebook-ai-schroepfer.html Hawking, S., Russell, S., Tegmark, M., & Wilczek, F. (2014). Transcendence Looks at the Implications of Artificial Intelligence – But Are We Taking AI Seriously Enough? The Independent. Retrieved from https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html Ethics Guidelines for Trustworthy AI: High-Level Expert Group on Artificial Intelligence (2019). European Commission. Retrieved from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai Knight, W. (2019). Why Does Beijing Suddenly Care About AI Ethics? MIT Technology Review. Retrieved from https://www.technologyreview.com/s/613610/why-does-china-suddenly-care-about-ai-ethics-and-privacy/ Stone, P., et al. (2016). Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel. Stanford University. Retrieved from http://ai100.stanford.edu/2016-report Araya, D. (2019). Who Will Lead in the Age of Artificial Intelligence? Brookings Institution. Retrieved from https://www.brookings.edu/blog/techtank/2019/02/26/who-will-lead-in-the-age-of-artificial-intelligence/ Module 4 Roundup
– Hi, welcome back to module four’s roundup. We’re excited this week for two reasons. One, as we mentioned last week, artificial intelligence is one of our favourite topics, and there are a lot of implications there about the future and society, and a lot of things that are really important to us. So hopefully it’s been meaningful for you as you’ve gone through the module. But perhaps most importantly, we’re here in Hong Kong, where David Bishop and I are teaching a class this week related to FinTech, and we are kindly joined by some of our great students for this week’s roundup, to discuss some of the questions that you’ve shared with us. So maybe, David, you can kick us off with our first question. – So, as always, we’ve loved the comments you guys have sent, really appreciate you putting them out there, and some of those comments we kinda wanna throw out to our class. They’re from all over the world, a really diverse group. And so the first comment we had was about surveillance, and it really relates to some of the AI things we’ve been talking about this week. We’re constantly surveilled; there are cameras everywhere, there are ATMs on the street. So what do you think about that? What are some of your thoughts? Is it a little bit scary, or is it better because it makes the world safer? What are your perspectives on the utilisation of facial recognition and AI in our everyday lives? – Yeah, so I guess it depends where you’re coming from in the world. If you’re in the States, I think people would be really scared and kind of against this. If you’re from China, some views are that if you have nothing to hide, you have nothing to be scared of. But I think there are two main things. The first thing, if you’re from the States in particular, is your rights, your freedom. Some people have been seen in the news trying to cover their faces when they see a camera, and police officers actually force them to show their faces – Right – which I think is not right, so to speak. – Yeah. – Another thing is that these cameras in public can be tampered with. I think they can be used for blackmail involving police officers and law enforcement, or very high-level figures in the world. But there are also pros to having it as well. – Yeah. – So, you know, public safety… – So you personally, how do you feel? – Personally, I think it’s okay, I have nothing to hide. But I do feel, ’cause I am from both Hong Kong and the United States, that if I don’t wanna show my face on the camera, I shouldn’t be forced to. – Yeah. – But I don’t have an issue with it. – Yeah, so for the class out there, what he’s referencing is that during our class here we actually showed a video from London, of all places, which kinda surprised us, just from three weeks ago, where they had set up a police area outside on the street and were requiring everyone walking by to go through facial-recognition software. A gentleman didn’t want that, he thought it was an invasion of his rights, so he pulled his jacket up over his face, he pulled his hat down, and the police actually used that as probable cause to detain and question him. And so it’s kind of a lose-lose scenario: either you let them scan your face against your will, or you run the risk of them using that as probable cause to question you. So Cameron’s saying that, generally speaking, it’s not a big deal if they scan us and stuff, because it keeps us safer, nothing to hide.
But if somebody wants to hide their face, you think that’s perhaps what they should be able to do. Is that fair? – [Cameron] Yeah. – Okay, so Kate, what do you think? ’Cause you’re from Shanghai, from China, cameras everywhere. What is your take on this? – I still think that, how to say, intuitively it’s very scary, right? Everything you do shows up on surveillance and everything. But actually, surveillance has happened for hundreds of years, in different ways. – Yeah. – So I still think the technology, or say the tool, is not the centrepiece. The centrepiece is that we be mindful of these kinds of risks, like Cameron just said, and do the right thing. – Okay, so one twist on this question, and I’m curious if anybody has any thoughts. What we haven’t talked about in this course yet is the combination of these types of surveillance with deep fakes. So, some of the newer technologies are able to take anyone’s face and then put another person’s face on that video, and it looks extremely realistic. They’re using this in Hollywood really extensively now, and the technology is getting cheaper and cheaper. So it’s likely that certain people could potentially be framed for a crime, or these types of external surveillance could be used against someone that somebody wanted to harm in some way. Does that give you any additional pause? Or is this just something that, you know, maybe we can’t do anything about? – I think it’s inevitable that it will happen in some form or another. It’s really up to governments and regulators to have tight controls on this sort of misuse of AI and so forth. But I have faith that governments around the world will be able to control it and ensure that the usage is for the appropriate reasons. – Yeah, so our next question is about how we regulate some of these technologies. We’re talking about artificial intelligence; David brought up this twist on the previous question about deep fakes. And how do we regulate these? There are different countries in the world, different jurisdictions – Diverse group here – A diverse group here, and people have different opinions on this. So, you know, will there ever come a time when we can have a uniform rule or regulation that covers this globally? Practically that could be difficult, but is that something we should aspire to? So maybe we start with that question, because it’s quite important as a fundamental starting point in how we think about regulating some of these technologies. Do any of you have any thoughts about that? – Yeah, I guess how a lot of companies operate nowadays is not really restricted to a physical space. Back in the day, you know, you think about a retail shop, it’s confined to a physical space. Let’s say, if a shop is operating in New York, then they follow New York law, and if they’re operating in California, they follow California law. But nowadays, everyone’s shopping on Amazon, everyone’s shopping on different online websites, and at least in the States, the bar exam and the law are kinda specific to each state, so if some sort of crime, some sort of incident happens, which jurisdiction’s law do we go by?
And then if we go beyond one country: we buy things from all over, I buy things from the UK, I buy things from Hong Kong. Then who are the lawmakers or the regulators to really regulate this? What law or guideline do we follow, and when there’s some inter-country incident happening, which guideline is, I don’t know, like the golden rule? – Mm, yeah – To dictate, so I guess– – It’s really complicated, yeah. – So do you think there should be a universal law at some point? Would that be the best way to deal with this? Or can we rely on law as the way we deal with this, or are there other mechanisms, perhaps, that we should think about? – Maybe we need a supreme AI overlord who can just determine all those things for us, maybe. – Yeah, I mean, I don’t know, that’s a good question. I don’t have an answer to this, I guess. – Do you think that it’s even feasible that there will be global standards? – [Carl] So, yeah, it could be very complicated; like, even for nuclear weapons, we don’t all agree. So how it’s gonna work for FinTech or AI, it’s gonna be very complicated, I guess. – Yeah. Now, there’s a lot of money involved, though, and so you will see that typically where money is involved, and where cross-border commerce is involved, those are the rules that are typically the most uniform. So I think the best example would be intellectual property, where you have really large multi-national organisations like the WTO, or other global bodies, that kind of force companies into obeying certain rules. Do you think that the EU, and the US, and China perhaps, would be powerful enough at some point to force everybody into adopting something like that? ’Cause you don’t have to convince everybody, you just need to convince a couple of core powers. Maybe that could be plausible? – It’s complicated nowadays. Like, when you see the trade war already, China versus the US, who’s gonna take care of it? Who’s gonna take over? Who’s gonna decide? This is a good question. – Yeah – So… – Okay – Yeah – So I think the MiFID II example is quite interesting, actually, because, as Shannon was talking about, PII, personally identifiable information, ties directly into this idea of artificial intelligence and the data that’s there. – Yeah – Because if that data qualifies as PII – Yeah – that becomes quite an interesting question to think about. Because if it does, then it will automatically fall under an existing regulation – Yeah – and if it doesn’t, then why not? Because we could say our birth date, address, national identity number, you know, is personally identifiable information, but clearly our face should be a form of PII too, right? So it does raise some interesting questions. – Yeah, and I do think it also gives me some hope that there is the potential for more uniform guidelines going forward. Because if you think of capital markets, right, there’s a lot of uniformity. If you have a foreign or overseas company that’s listed, say, on a US stock exchange, then they have to adopt some of those rules. Contract rules are fairly uniform. Again, intellectual property, product liability rules, even the development and production of products. So basically, these countries do wanna do business back and forth, and AI and information technology are gonna be more cross-border in nature, so I think it is actually very plausible that there will be some type of standards going forward.
I guess the question is who’s gonna be able to push those things and enforce them, right? Enforceability is always, I think, the biggest challenge when you’re dealing with cross-border things. Yeah. – There are different definitions of PII; there’s no single definition of PII. – Right, currently, yeah. – So, yeah, I mean, one country can designate, like, thirty different fields as PII. – Yeah. – But another country may think that’s not invasive, so who’s the authority to say what’s the right level of control, what’s true PII, if there is such a thing – Yeah – and what’s bogus PII? – Yeah, and this is actually a good tangent to our third topic. Because one of the topics that students were commenting on was really about authority and power, specifically in terms of introducing AI in order to reduce human bias, when, unfortunately, the data that creates a lot of AI data sets is already biased, and so could potentially re-entrench existing bias. And so, I’m just curious about your perspective. Some of the things we’ve talked about include mortgages built off of perhaps biased, racist data, which then create AI systems where the computer decides to give worse loan terms to a minority, let’s say. Kenny has a technology background. Do you think that introducing artificial intelligence, machine learning, and these kinds of amoral, non-human actors is going to create a more fair and transparent system, or is the data so tainted that it’s only gonna entrench these human biases and make even worse outcomes for some people? – From a technology point of view, as we know, an AI model actually comes from fitting in a lot of data, okay, and that data is based on past history, for example, from a bank, the mortgage approval history and transactions. So if there is some bias in the very first place, for example, the approval manager keeps rejecting mortgage applications because of race, because of background, then that data will actually feed into the model, and then the model will have the bias. – [David and teacher] Yeah. – But then, the thing is, from an ethical point of view, what about the bank, or the government? Would they like to change this kind of bias? This is quite hard to say: is it “okay, this is fine, we’re used to doing this, this is the result we expected”, or “hmm, this is not good, we need to change it”? I guess it depends on the bank or the government, how they treat this, to make it a fair judgement. – Yeah, okay, great, and as someone who’s, again, in technology, do you think there are things that we can do now to hopefully ensure a more transparent and ethical system? – Of course, I believe the government has to take the lead: to educate the banks and the organisations who use AI, to promote fair use and the ethical side of AI, and then regulation from different points of view, discrimination, racism, different kinds of aspects. They have to set very clear directions and regulations, but it takes time to do that. – What can we do? I think everybody that has been thinking about these AI-type issues, about data bias and things like that, understands the basic issue: if you get bad data in, bad outcomes follow. But let’s say that happens unintentionally. What is the process to address that? Okay, Carl, yeah. – So, in fact, the first good thing is to be able to realise, thanks to AI, that we are doing things in the wrong way.
So we are making mistakes, and we need to recognise those mistakes. And then we’re gonna be able to build on that, and to recognise that, with ethics, we’re gonna be able to build the right model, one without discrimination about gender, about race, or whatever. I think it’s a good point. And to bring new data to build a model which is gonna make the world, let’s say… the world a better place (laughter) – There we go, a typical Silicon Valley mantra. Okay, so, kind of concluding thoughts: give us, like, one or two lines on why maybe you’re excited about AI and its potential for the future. Do you wanna give us an idea? What is it about AI that excites you from a FinTech standpoint? – From a FinTech standpoint, we think it’s actually making our lives better in many ways. Like, the payment system has been changing a lot over the past few years. But to a certain extent I’ve found that with AI there are a lot of data-sharing issues and privacy issues that also arise at the same time. So there always has to be a balance between AI usage and how it should be used in an ethical way. – So, based on what we’ve discussed already, I think that one of the most powerful aspects of FinTech is the inclusivity it brings to people, and especially to people who don’t have access to traditional financial services. That’s one of the biggest benefits. I come from an emerging market, Bangladesh, and we talked about that at length in this course: how FinTech has brought so much access to financial services to the unbanked. And I think that’s one of the greatest potentials. For all the risks, such as bias and other things that marginalise people, there’s a huge positive to FinTech that actually brings people access to financial services to improve their quality of life. – Yeah, it’s interesting. I’ve always thought of Bangladesh as a good example because, at the University of Hong Kong, we’ve got a great university and yet no Nobel prize laureates, while the University of Dhaka, also a great university, has, I think, two. A lot of times the most beneficial emergent technologies happen in emerging markets, because it’s really just out of necessity, right? So if you think of Muhammad Yunus, he’s very explicit. He’s like, “I wasn’t trying to invent microfinance, I was just trying to serve the needs of a lot of people.” And so I think, again, I love the developing space as well, and some of the most interesting utilisations of these technologies are really in that space, because the potential impact is just astronomical. It’s really cool, yeah. – And that’s also brilliant in the sense of, going back to the topic of eliminating bias in AI, because, David, you were talking about the bottom of the pyramid. They’re collecting data from the bottom of the pyramid. – Yeah – They’re not collecting only biased data from rich people or privileged people, so that data is going to be incredibly powerful in contributing to less biased AI – And just to clarify, I’m not sure if this is what you meant, but there has not been data collected on them in this way before, which means it’s completely untainted. Hopefully. If it’s done properly. So you’ve just got a clean slate moving forward. Yeah, great. So, any other kind of final comments that anybody has? Or excitement about AI?
– I think not only FinTech, but technology in general now draws a lot of attention, so there is a lot of discussion, just like what we’re doing today. That’s actually a positive: a chance to revisit the ethics, the risks, and also the current system. What is the issue with the current system? Why does the current system, for example the financial system, not serve some… – Yeah – People – Yeah – Where’s the gap? And then challenging business as usual actually has a lot of potential to add a lot of value to the economy, though perhaps also at an expense. – Yeah – Great. Well, we really appreciate our students for joining us in this roundtable. We really wish we could actually meet with all of you in our online course in this kind of setting and capacity and engage with these ideas. Hopefully we’ll be able to continue to do this in some way going forward, and we really look forward to connecting again after module five, which is really important as well, as we revisit some of these questions in a broader structural sense – Talk to you next week – Thank you
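Before module five, a brief aside on the data-bias point raised in the discussion above. The following is a minimal sketch, not material from the course itself, of how a model trained only on “neutral-looking” features can still re-learn historical discrimination when a proxy feature, here a postcode, correlates with group membership. All numbers, the 80% proxy strength, and the penalty applied to group 1 are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected group label (0 or 1); credit scores are identically
# distributed in both groups by construction.
group = rng.integers(0, 2, size=n)
score = rng.normal(0.0, 1.0, size=n)

# A proxy feature: postcode tracks group membership 80% of the time,
# much as neighbourhood often does in real lending data.
postcode = np.where(rng.random(n) < 0.8, group, 1 - group)

# Hypothetical biased history: past officers approved on score but also
# penalised group 1 regardless of creditworthiness.
approved = (score - 1.0 * group + rng.normal(0.0, 0.5, size=n)) > 0

# Train on "neutral-looking" features only; the model never sees `group`,
# yet the biased labels leak back in through the proxy.
X = np.column_stack([score, postcode])
model = LogisticRegression().fit(X, approved)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted approval rate = {rate:.1%}")
```

Running this, the predicted approval rates differ sharply between the two groups even though their credit scores are identically distributed: the bias entered through the historical labels, not through any feature the model was shown directly.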
Module 5: A Decentralized Future
5.0 Module 5 Introduction
– Welcome to module five. In this module we’re gonna talk about some of the key reasons that people are calling for FinTech innovation, and one of the main ones is its decentralised nature, democratising finance and allowing regular people to participate more fully and affordably in financial transactions through technologies like cryptocurrency, non-government-issued IDs, peer-to-peer lending, things of that nature. And so in this module we’re gonna address some really big questions, considering whether FinTech should lead to a decentralised, democratised system of finance, or whether existing institutions will adopt FinTech strategies to cement their existing hold on financial markets. During this module we’re gonna discuss these major themes, including the perceived desire for democratisation of goods and services. Is that good or bad? Will there be unintended consequences? A lot of this is gonna be a continuation of things that we’ve talked about in other modules, including module two, where we talked about blockchain, so we’ll be referencing back to those from time to time. And one of the key things that we’re gonna be emphasising is the sources of power. So, for example, will FinTech innovation lead to a decentralisation of power, or maybe a concentration of existing power sources like governments and banks? Or will a new concentration of power be created in TechFins like Amazon or Tencent, which owns an app like WeChat, for example? Okay, we’re gonna start module five with a quick story. Now, I’ve been living and working in China for over a decade and have travelled there many times. Last year I had the opportunity to go to a small part of Western China where I’d never been before. So imagine a rural desert landscape with dust everywhere. In the morning I decided not to eat breakfast in the hotel and instead went out to get something on the street. I noticed that there was a particular street vendor, an old woman, and you could tell she’d been doing this for a very long time. She was surrounded by people, so it was obvious that her food was quite popular. So I went over there quite excitedly, I watched what everybody was doing, and when it came to be my turn I ordered some food. Thankfully I can speak Chinese, and so that part of it was easy from a cultural standpoint. As I reached into my wallet and started to pull out some cash, I could immediately see the concern on her face when we both had the realisation that every other person in that circle had paid using their phone, and she did not have the ability to give me change in cash. So here I was, probably the person with the most access to the financial industry, and yet I was the one that was completely cut out of the transaction and out of this marketplace. Now, even though I couldn’t get my breakfast and I was a little upset about that, I found this experience super cool, because this community in rural Western China had in a short amount of time moved almost completely away from cash. And indeed, anyone that’s been to China recently knows that most communities are the same way, and the government, for those that have been there, is actually very supportive of this change.
So we find this story is gonna be a good summary not only of the things that we’ve discussed in modules one through four, but also a nice transition to help us start asking some of the bigger questions that we’re gonna be analysing in modules five and six. So before you move on to the next video, we just wanna ask you to think about the story. From a FinTech standpoint, what are some of the observations that you have, especially about how FinTech is impacting local people, average people, in rural communities or advanced communities all over the world?
Additional Readings
He, D., Leckow, R., Haksar, V., Mancini-Griffoli, T., Jenkinson, N., Kashima, M., Khiaonarong, T., Rochon, C., & Tourpe, H. (2017). Fintech and Financial Services: Initial Considerations. IMF Staff Discussion Note. Retrieved from https://www.imf.org/~/media/Files/Publications/SDN/2017/sdn1705.ashx
Chuen, K., & Lee, D. (2017). Decentralization and Distributed Innovation: Fintech, Bitcoin and ICO’s. Stanford Asia-Pacific Innovation Conference. Retrieved from http://dx.doi.org/10.2139/ssrn.3107659
Magnuson, W. J. (2017). Regulating Fintech. Vanderbilt Law Review, Forthcoming; Texas A&M University School of Law Legal Studies Research Paper No. 17-55. Retrieved from https://ssrn.com/abstract=3027525
5.1.1 Is FinTech Leading to Inclusion or Exclusion?
– Okay, welcome back. Now, although that was a quick story, we hope that you had a chance to think about it, because we believe a lot can be observed from it. Let’s consider a few of these things. – Probably the most immediate observation is something that we’ve already discussed: the scale and penetration of FinTech innovation is faster and broader than anything we’ve seen before. Everyone on that street had a modern smartphone, and they had all adopted the technology into their daily routine. One reason this is true is because FinTech innovation can lead to efficiencies, which in turn can help a lot of people. As discussed in module two, FinTech innovation can help cut out the middleman, which saves costs. And for the street vendor, using an app payment system means not needing to handle cash, which likely reduces the risk of theft and, for a food worker, is more sanitary. And in many cases using cashless payments is just faster and more efficient, and leads to better service. So basically, using a mobile payment system helps her business be more efficient and hopefully more profitable. – But FinTech innovations can also lead to exclusion. I probably had the greatest access to traditional finance, whether cash, credit, or other loans, of anyone on that street, yet I was almost completely excluded from the marketplace, not even able to purchase breakfast. As in this example, FinTech can lead to separation from financial markets, and therefore from basic necessities. Although buying breakfast is a simple transaction, there are many layers of filtering in the story. For example, if you are required to pay with a phone, then, guess what, you have to have a phone. And then you have to have the app, WeChat, and then an account on WeChat, and then money or credits in that account (a toy sketch of this chain of preconditions follows below). So one of the interesting challenges the FinTech industry faces concerns access to these technologies. While many are hopeful that FinTech innovations will lead to better access to finance for the masses, others are concerned that they could also lead to increased exclusion from basic services. And this can be particularly true if governments decide to intentionally exclude some people from these platforms. – Now, going back to the example of the street vendor in China, we now ask the big question of the day. Will these innovations in FinTech bring the world closer together through multinational FinTech solutions, or will we become more and more isolated from each other? For example, credit cards have made it easy to make purchases around the world. No matter where I am, I feel confident I can make necessary payments using my credit card, though sometimes that means paying high fees. – But on the other hand, David and I have been travelling to and teaching in China for many years. And while we love travelling there, from a FinTech perspective every year it seems more and more insular and disconnected from the rest of the world, because of this simple paradox: the better their app ecosystem gets and the more people in China become interconnected via these apps, the more they simultaneously distance themselves from the rest of the world. Now, this was made clear in the experience with the Chinese street vendor, and this is happening in other countries too. Okay, so let me ask you another question. Do we think that FinTech is bringing the world together or pushing us further apart?
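As promised above, here is a toy sketch of that chain of preconditions. The Buyer type and the can_pay helper are hypothetical, invented purely to make the layers of filtering explicit; real wallet apps impose many more checks (identity verification, device binding, and so on), each of which is another point at which someone can be excluded.

```python
from dataclasses import dataclass

@dataclass
class Buyer:
    has_smartphone: bool
    has_payment_app: bool   # e.g. an app such as WeChat
    has_account: bool       # a registered, identity-verified account
    balance: float          # money or credits in that account

def can_pay(buyer: Buyer, price: float) -> bool:
    """True only if every layer of the filter is passed."""
    return (buyer.has_smartphone
            and buyer.has_payment_app
            and buyer.has_account
            and buyer.balance >= price)

# A visitor with plenty of cash but no local app fails at the second layer,
# regardless of how much access to traditional finance they have.
visitor = Buyer(has_smartphone=True, has_payment_app=False,
                has_account=False, balance=0.0)
print(can_pay(visitor, 10.0))   # False
```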
5.1.2 Money and Currency – Trust and Power
– Another interesting lesson from the street vendor example was that value was being transferred, but was this money in a traditional sense? It wasn’t physical currency, that’s for sure. But there was currency backing that transaction, even if distantly, on some cloud somewhere. This is very different from the systems of payment that have existed for thousands of years, and will likely lead to the next evolution not only of how we pay, but also of our perception of money. And actually, on that note, an interesting vignette: even in North Korea, traditionally a country that we think of as cut off from the global financial system, many people now use cell phones. And frequently they top up, like in other countries in the world, to buy credits. But now those credits can be transferred from one cell phone user to another as a form of payment. And so this is also a different concept of money. – Now, this reminds me of one of my favourite things about living in Hong Kong: the Octopus Card. With this little card I can pay for just about anything: transportation, food, and even government services. And you can see it’s pretty worn here; I’ve used this card practically every day since my family moved here in 2007. And my children almost exclusively use their cards to make purchases. – While some may say that the shift to making payments from phone apps or the Octopus Card is just the next natural iteration in making payments, much like the credit card and the handwritten cheque before it, and that people just get used to the changes, the way children in Hong Kong are already used to paying with an Octopus card when they go to the store, we do need to understand that such changes can actually have very significant implications for society. – For example, there are personal implications. As we know, these developments can make it easier to access and use money; having a credit card means you don’t have to carry around thousands of dollars in cash to make a purchase, for example. But on the other hand, studies seem to indicate that people spend more money when using a credit card than when using cash. And there’s reason to believe that people spend even more money when using an app or a web-based service than when using a credit card. It’s believed that the proximity issue that we discussed previously has a lot to do with this. This is pretty common sense, right? Holding cash is proximate, and therefore forces us to think about the work that went into earning the money. Thus we naturally spend less when we’re holding cash. – But beyond these personal implications, as mentioned before, there are broader societal implications to consider. Going back to David Bishop’s experience with the street vendor in China, this was a great example of disruption of the many formidable institutions that for millennia have controlled not only finance but most other aspects of institutional power. Remember, there were no banks, whether physical or virtual, in this scenario. WeChat, called Weixin in China, isn’t a bank or even a financial institution in the traditional sense. That’s why we refer to such companies as TechFins: large technology companies that, because of their size, user base, and overall scale, are starting to move into areas of commerce and services traditionally controlled by banks. – But not only were there no banks, there was also no physical currency.
As I’m sure you’re aware, banknotes and coins can only be produced by government-approved organisations. For example, US dollars are printed by the Bureau of Engraving and Printing and are issued by the Federal Reserve. And here in Hong Kong, our banknotes are printed by Hong Kong Note Printing Limited and issued by three banks. So here I have three $100 notes: this one issued by the Bank of China (Hong Kong) Limited, this one by the Hongkong and Shanghai Banking Corporation Limited, more commonly known as HSBC, and finally this one, issued by Standard Chartered Bank. – Okay, so who cares? You’ve probably all held and seen foreign currency at some point in your life, and know how it can be a pain to exchange currency when going from country to country. You’ve also probably had the experience of shopping in another country and trying to convert the cost of something from one currency to another. Now, to be honest, although I’ve lived in Hong Kong for nearly ten years, I still find myself frequently converting the price of a good into US dollars just to help me get a better sense of the cost or value of that particular item. Well, the reason this matters is a combination of trust and power. As we outlined in module one, the value of currency is really only sustained by a broad sense of communal trust. And by changing the nature of money, we are potentially altering the foundations of trust, which can have broad implications across society. – But this is also about power. The ability to print your own currency is a significant source of power. Maybe you’ve seen a movie where criminals try to steal or create ink plates so that they can print their own money; to be honest, when I was a kid, that was a dream of mine. As another example, there’s been a lot of discussion about the power the United States has in the world because of the outsize influence of the US dollar, which is widely considered the world’s reserve currency. For example, right here in Hong Kong our money is pegged to the US dollar, meaning the value of the Hong Kong dollar rises and falls along with the US dollar. So think about how much power that involves. It means that some folks in the US who have maybe never even been to Hong Kong can change the value of our currency here, which in turn can affect the cost of everyday goods, housing prices, the value of your personal savings, company profitability, and many, many other things.
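A small worked illustration of what the peg means in practice. Under Hong Kong’s Linked Exchange Rate System, the exchange rate is kept within a band of roughly 7.75 to 7.85 Hong Kong dollars per US dollar (the band as we understand it at the time of writing; check current figures). The hkd_to_usd helper below is our own illustrative function, not an official formula.

```python
# Convertibility band of the Linked Exchange Rate System (HKD per USD),
# as commonly quoted; verify against current HKMA figures.
BAND_LOW, BAND_HIGH = 7.75, 7.85

def hkd_to_usd(amount_hkd: float, rate: float = 7.80) -> float:
    """Convert HKD to USD at a rate inside the peg band (illustrative)."""
    assert BAND_LOW <= rate <= BAND_HIGH, "rate outside the peg band"
    return amount_hkd / rate

# An HK$100 banknote is therefore always worth roughly US$12.7-12.9;
# when the US dollar moves against other currencies, Hong Kong prices
# effectively move with it.
print(round(hkd_to_usd(100), 2))        # ~12.82 at the 7.80 midpoint
print(round(hkd_to_usd(100, 7.75), 2))  # ~12.9 at the strong edge
```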
Additional Readings
5.1.3 Will Governments Accept New Currencies?
– Okay, so returning to the street vendor example again. On that street in China there was no physical currency involved. Now, is the goal of FinTech innovation to eliminate all physical currency, or perhaps even government-backed currency altogether? If the latter, do you think that governments will just roll over and allow control of their currency to be taken away? – Now, in the China street vendor example, although people were paying with a web app, the transactions were still backed by government-issued currency. What happens when this is not the case? Cryptocurrencies like Bitcoin are usually not government issued, nor even government backed. Will governments be willing to allow the use of these cryptocurrencies within their borders? And possibly even adopt one of these currencies as their own? – Some nations have already announced that they would move to cryptocurrency in some form. For example, Venezuela launched the Petro cryptocurrency to help the country amid international economic sanctions. But the Marshall Islands is the first country to launch a legal-tender cryptocurrency, meaning their new currency will be recognised as legal tender, real money, and will have equal status with their current currency, which is, you’ve guessed it, the US dollar. We told you: the US dollar, lots of power. – Even the name of the new Marshallese currency, it’s called the ‘Sovereign’ after all, is a statement about power. The name was chosen to emphasise the sovereignty of a country which has a history of colonisation, nuclear testing, and resulting poverty. When discussing the controversial cryptocurrency, the president said, “This is a historic moment for our people, finally issuing and using our own currency, alongside the US Dollar. It is another step manifesting our national liberty.” – This switch will have a lot of implications and could be the start of a new age for money and finance. Interestingly, the International Monetary Fund, the IMF, warned the Marshall Islands government about issuing such a cryptocurrency. They were concerned that the currency could be manipulated by crime syndicates and fraudulent business practices, the types of activities that have often been tied to cryptocurrencies, and also that foreign governments could cut financial aid to the Marshall Islands if they broke from the US dollar with their own e-currency. Okay, so let’s stop and consider some important questions. Do you think that cryptocurrencies and other FinTech solutions will ever be largely adopted by banks and governments? Or will they lead to a decentralised future where banks and governments are less influential in these areas? Will TechFins take over the finance industry? And will other countries adopt cryptocurrency as legal tender?
Additional Readings
5.1.4 Will FinTech Take Control of the Financial System?
– Okay, so in summary, FinTech is leading to some really amazing efficiencies that can help a lot of people bypass middlemen and save money, but it can also lead to exclusion within countries and exacerbate divides between countries. This is largely because these innovations are completely changing the very concept of money, which is leading to questions about trust, proximity, and especially power. – So where does this leave us? Will banks allow their power to be eroded by decentralised cryptocurrencies, peer-to-peer lending networks, and other FinTech innovations? Or will they use these developments to further consolidate their power over financial products? – Will governments stand idly by while their power over currency, personal identification, and other traditional government-based powers is taken away by FinTech startups? – And how will both banks and governments react to the rise of the TechFins, who seem to be growing daily, increasing in both power and profits as they expand further and further into services traditionally handled by other institutions? In this module we will explore some of these questions. But first, let’s talk about what we mean when we say decentralised, or democratised.
Additional Readings
Zetzsche, D. A., Buckley, R. P., Arner, D. W., & Barberis, J. N. (2017). From FinTech to TechFin: The Regulatory Challenges of Data-Driven Finance. University of Hong Kong Faculty of Law Research Paper No. 2017/007. Retrieved from http://dx.doi.org/10.2139/ssrn.2959925
Marous, J. (2018). The Future of Banking: Fintech or Techfin? Forbes. Retrieved from https://www.forbes.com/sites/jimmarous/2018/08/27/future-of-banking-fintech-or-techfin-technology/#5bbdbbd15f2d
Ren, D. (2018). Tightening Regulations Make FinTechs Easy Takeover Targets for Banks Stepping Up Digitalisation Drive. SCMP. Retrieved from https://www.scmp.com/business/companies/article/2159718/tightening-regulations-make-fintechs-easy-takeover-targets-banks
5.2.1 Is FinTech an Evolution or a Revolution?
– Now, if you pay attention to the FinTech space, chances are that you’ve heard the words decentralised and democratised a lot. FinTech experts of all varieties love throwing around these terms, but what do they really mean in a FinTech context? – Well, many see FinTech development as a natural process of technical advancement, much like the locomotive surpassing the stagecoach. Others see FinTech as a direct result of, and possibly even a fight against, traditional centres of financial power. Some believe the power to control banking, currency, and even our own identity has been held by an elite few, and that the control of finance has been neither transparent nor democratic. – Whether as a result of the natural evolution of technology or as a direct backlash against existing power structures, the reality is that FinTech is seen by many as having the potential to completely change, and possibly even destroy, existing financial power structures. And it’s important to understand both these motivations and the possible outcomes. So, do you think that FinTech advancements are a natural evolution of technology or a direct result of mistrust of institutional power?
Additional Readings
5.2.2 Have We Lost Trust in Financial Institutions?
– As we have discussed many times, finance is largely built on trust, and in the past institutions like banks and governments have served as the guarantors of trust in the financial world. But whether as a cause or an effect, trust in institutions has diminished significantly in many countries over the past decade. – Now, probably the best recent example of a cause for distrust in the financial world is the global financial crisis, including all the major financial scandals that were exposed as a result of the crisis. For most of us, we had to stand by and powerlessly watch as the global financial system nearly collapsed. We had a daily reminder of how flippantly certain members of the global financial community pursued profits at the expense of their customers, and how government regulators were not sufficiently protecting us. – Millions of people around the world lost their homes, their savings, and essentially their futures. In the US alone, it is estimated that American households lost approximately $20 trillion in wealth as a result of the financial crisis. And as a result, it is no surprise that many of these people began to distrust the very institutions that were meant to protect and serve them. – Now, as a personal example, my wife and I bought our first house the year I graduated from law school, which was around 2005. Obviously we didn’t know that we were buying at pretty much the worst time possible, with the financial crisis decimating the real estate market only two years after we purchased our home. When the market crashed, the value of our home dropped by over 30%, and it took a really long time to recover. Well, my family recently sold our home, about 13 years after we purchased it. The selling price? Exactly the same amount that we purchased it for back in 2005. So while we’re grateful that we didn’t really lose any money, at least in nominal terms, there was a lost decade where many people around the world lost most of their net worth and have struggled to recover ever since. – Let’s be honest, many large financial institutions have not done much since the financial crisis to reduce our concerns. As noted earlier in this course, banks such as Wells Fargo and HSBC have had multiple high-profile scandals that have gutted their customers’ trust. And it seems every week there’s some new scandal that comes out involving financial institutions.
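A quick back-of-envelope check on that “lost decade” point: selling for the same nominal price 13 years later is still a loss in real terms once inflation is counted. The purchase price and the 2% average annual inflation rate below are illustrative assumptions, not figures from the story.

```python
purchase_price = 300_000   # hypothetical 2005 price, in any currency
years = 13
inflation = 0.02           # assumed average annual inflation rate

# Deflate the (identical) sale price back into 2005 money.
real_value = purchase_price / (1 + inflation) ** years
loss = purchase_price - real_value
print(f"sale price in 2005 money: {real_value:,.0f}")
print(f"purchasing power lost:    {loss:,.0f} ({loss / purchase_price:.0%})")
```

Under these assumptions, a “break-even” sale conceals a loss of roughly a fifth of the money’s purchasing power.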
Additional Readings
5.2.3 Can We Trust TechFins?
– Now, during the past decade, as people shunned banks and traditional holders of power, they turned instead to the so-called TechFins: digital platforms like Facebook, Amazon, Google, and Tencent that provide eCommerce, peer-to-peer lending, and communications, and increasingly serve as the keepers of our digital identity. – The rise of the gig economy and of social media sites has meant customers now have more control over nearly every consumer service, whether hailing a taxi, deciding where to stay while on holiday, or even how to pay their bills. These large digital platforms have transcended many traditional financial institutions, not only in terms of customer engagement but also in terms of trust. – But after more than a decade of explosive growth, the TechFins are themselves now caught up in many scandals and are seeing their own trustworthiness questioned. And some contend that these companies are now so large and powerful that they’re actually influencing government policy and even national elections. So we’ve got a question for you. Do you trust TechFins like Amazon and Tencent more than you trust banks? Do you think that TechFins should be regulated like a utility? And what do you think about companies like Facebook entering the crypto payment space?
Additional Readings
5.2.4 Should TechFins Be Regulated Like a Utility?
Okay, so a lot of these TechFins are getting just gigantic, and people have started to distrust them a little bit. So one of the major conversations in this space now is: should these TechFins be regulated like a utility? – Yeah. – So first, maybe, describe what it means to be regulated like a utility, but then, do you think that should happen? – Yeah, so I think this is a timely question, to be honest, and it will continue to be a big question, because we increasingly have an agglomeration, a collection, of power that’s occurring – They say the Big Five, right? – That’s right, yeah. – As opposed to money being collected by a select few financially, now it’s really data that’s being collected, and many of us in the world who use the different services provided by these large technology companies are embedded in their ecosystems, so it’s difficult to extract yourself from that even if you tried. And so one of the exercises we did in one of my classes recently, for one of these large technology companies whose name everybody knows, was to list out what services they had. And this class was at 9:30 in the morning. And I said, “From the time you woke up this morning to the time you came to class, how many of you used the services on the board?” And we had a list of maybe 15 types of services from this one company, and most of the class had used at least 80% of the services, just in that window from the time they woke up to the time they got to class that morning. And then I asked them the question, I said, “Imagine trying to take yourself out of that ecosystem, i.e. not use any of these products. Do you think you could?” And the– – You’ll see articles about that, some journalists on Medium or something saying, “Oooh, I stayed away” – Or like, “I’ve tried this for five days” – And everybody says it may be possible, but there’s so much transaction friction from doing that, you know, from moving from that to another service, that nobody would do it. And that’s a challenge, right, because that generates so much data about you individually and us collectively, which is what is fuelling the continued growth. – Yeah, one of the things that I think is super cool to think about is that within our lifetime, the lifetime of most people, certainly of you out there watching this, there have been no new utilities. So when we talk about utilities, for those who are not familiar with this, these would be services and goods that are so essential to society that the government actually has special regulations about them, and in fact they often have caps on the amount of profit or revenue they can actually earn. So for example, the top utilities would be electricity, water, sewage, sanitation collection. Anything else? – No, I think those are the main ones – That’s pretty much it, right? – And in a lot of places in the world some of those are actually government-controlled entities, or partially government-controlled entities – Yeah, or they’ll give them a legal monopoly or something, right.
Okay, so now imagine this. What we’re saying when we say regulated as a utility is: has Google become so critical to society, in terms of its search engine, do we use it so much, is it so critical, that it should actually be treated as a public good, just like electricity, right? Now, you have to understand, electricity was a technical innovation as well, right, as were transportation, sanitation, and water, and at some point the technical innovation of pumping water into your home became a public good and therefore a utility. But it was started by innovators, started by private companies, right? So now the question is, have we advanced to that point with social media? Is Facebook so important to everyday society that all of us in society have a stake in it? That’s really the question that we’re asking. – And so, to that point, you know, if we think about Google or Facebook, if we think about a single product and try to make the argument for regulation as a utility around that single product, probably not. But if you think of it collectively, so if we use Facebook as an example, Facebook as a social media platform has a lot of users, but then you think about its influence beyond that, like in instant messaging, where there’s Messenger on Facebook, there’s WhatsApp, which they own, there’s Instagram, which is another part of Facebook. Those are different messaging platforms, and together they have a far greater reach. And then, you know, one of the things apparently being debated at Facebook at the moment is whether they’re going to have their own payment system. – Right. – And so if they were to roll that out across all their users, across all these platforms, then immediately they would become one of the largest financial players in the world. – Immediately. – Immediately. And so at that point, to the question: should they be regulated like a utility? Should they be regulated like a financial institution? I mean, it starts raising a lot of interesting questions. – Well, and to that point, just very quickly, because again, for those that don’t know the legal or even economic history of these things, you need to understand that when we say regulated as a utility, that probably means breaking up these companies.