Artificial Intelligence for Business: Morning Brew-ish Take

This wonderful summary of class discussions in my TO433 AI for Business course comes from my brilliant student Greg Cervenak. Greg was bored in quarantine and decided to make this course reflection look and feel like a Morning Brew article. Introducing TO 433 Brew, a newsletter that boils down the four coolest components of the course, interwoven with some current events.

AI4Business BREW: The Highlights

#1: Benefiting from COVID-19 unemployment??

If you are a machine, yes! These past few weeks have been rough, with recent weekly jobless claims exceeding 22 million in the United States due to COVID-19 financial burdens. Companies that are managing to maintain employment through these times will be looking to automation in the near future to trim payroll expenses, even slightly. Those that had to let massive numbers of employees go are likely already resorting to AI and machines to inexpensively replace the jobs they eliminated. Further, as the job market recovers at the end of the current recession, will companies be willing to return to hiring immediately, or will they attempt to fill “open” roles with computers for a fraction of the long-term cost, hedging their payroll ahead of the next economic downturn? Chances are, even if the practice is not widespread, some companies will adopt this mentality, making it harder for low-skilled labor to return to the workforce. It is a great time to be a machine “looking for work,” or a machine looking to be built, for that matter.

Earlier in the semester, I strongly argued that the rise of artificial intelligence would upskill the workforce rather than replace it. In fact, to quote my first discussion argument:

it’s a virtuous cycle in my opinion: increased prevalence of AI —> more education ability and fewer low skilled jobs needed —> low-skilled workers learn faster and transition from jobs that can be automated to jobs that help drive the future of artificial intelligence —> more AI —> repeat.

In these unprecedented times, I am less convinced: companies simply do not have the cash to upskill their workforce, and it makes more sense to decrease payroll by replacing workers rather than training them. This boils down to one point we have explored throughout this course: nobody knows for sure what will happen, and single events can trigger major changes in the industry’s trajectory. In just 3.5 short months, my entire outlook on the future of machine learning’s impact on employment drastically changed, and I am confident it will keep changing as new events and technologies develop.

#2: Size matters (and simplicity too!)

Sorry, everything you’ve been told about size not mattering just doesn’t hold up in the case of AI. Here are three examples — input data, costs of outcomes, and simplicity — where size does indeed matter.

First, AI is a very powerful tool, assuming it is trained on the proper data. To quote class discussion: “it makes me wonder if the model is really only as good as its training data.” The answer is a resounding yes. You could technically predict the complete opposite of your anticipated result if your training data is too limited in scope or inaccurate. For example, if you trained a cancer predictor on only 10 existing patients, none of whom had cancer, then your algorithm is surely going to predict that nobody has cancer. As such, your training data must contain a large number of samples that accurately represent your population. Yes, the size of your data matters.
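To make the cancer-predictor thought experiment concrete, here is a minimal sketch in Python (the course used R and Azure, but the idea is language-agnostic). The data and the majority-class baseline are hypothetical, invented purely for illustration: trained on 10 patients, none of whom have cancer, the “model” can only ever answer “no cancer.”

```python
# Hypothetical illustration: a majority-class baseline trained on
# skewed data can only parrot the majority label back at you.
from collections import Counter

def train_majority_baseline(labels):
    """Return a 'classifier' that always predicts the most common training label."""
    majority_label, _ = Counter(labels).most_common(1)[0]
    return lambda patient: majority_label

# Training set: 10 patients, all labeled "no cancer" (made-up data).
train_labels = ["no cancer"] * 10
predict = train_majority_baseline(train_labels)

# No matter what the new patient looks like, the answer never changes.
for patient in ({"age": 34}, {"age": 71}, {"age": 55}):
    print(predict(patient))  # always "no cancer"
```

A real model with features would behave less crudely, but the lesson is the same: if the training sample doesn’t represent the population, neither will the predictions.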

Next, the size of your “weights” and costs matters when iterating on your algorithm. In the TO414 Advanced Analytics class, when we spoke about cost matrices, we largely used arbitrary numbers to illustrate the methodology R uses to present them. However, when creating an algorithm for something as sensitive and life-and-death as cancer, the magnitude of the values chosen for the cost matrix becomes very important. Indeed, the cost of an incorrect “you don’t have cancer” could be set to what the resulting legal claim would cost, and the cost of unneeded treatment to the price of that incremental care, with the legal action likely carrying a much higher magnitude. Therefore, to perfect the outcome of your algorithm, you must determine these figures and use them wisely. Yes, the size of your figures matters.
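A quick sketch of why those magnitudes matter, with entirely made-up dollar figures: instead of calling “cancer” only when the predicted probability crosses 0.5, a cost-sensitive decision picks whichever label minimizes expected cost under an asymmetric cost matrix.

```python
# Hypothetical cost matrix (dollar figures are invented for illustration).
# cost[(true_label, predicted_label)]: correct calls cost nothing; a missed
# cancer (false negative) carries a large legal/harm cost, while unneeded
# treatment (false positive) is cheaper but nonzero.
COST = {
    ("cancer", "cancer"): 0,
    ("cancer", "no cancer"): 1_000_000,   # false negative: legal claim
    ("no cancer", "no cancer"): 0,
    ("no cancer", "cancer"): 20_000,      # false positive: extra treatment
}

def expected_cost(p_cancer, predicted):
    """Expected cost of a prediction given the probability the patient has cancer."""
    return (p_cancer * COST[("cancer", predicted)]
            + (1 - p_cancer) * COST[("no cancer", predicted)])

def decide(p_cancer):
    """Pick the label with the lower expected cost."""
    return min(("cancer", "no cancer"),
               key=lambda label: expected_cost(p_cancer, label))

# Because a missed case dwarfs the cost of unneeded treatment, even a 5%
# cancer probability triggers treatment; only a tiny probability does not.
print(decide(0.05))   # → cancer
print(decide(0.001))  # → no cancer
```

Change the 1,000,000 to 30,000 and the decision boundary moves dramatically, which is exactly why the magnitudes in a cost matrix deserve real thought, not placeholder numbers.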

Finally, this last point is less about size than about simplicity, but it fits best here. I remember being very frustrated in the TO414 Advanced Analytics class with R and some of the complexities required to run simple regressions. However, once we started using Microsoft Azure in TO433, I really began to recognize the value of artificial intelligence and how easy it could actually be to implement. Any CTO or executive who needs to make decisions on AI will likely be looking to simplify their current workflows, and the more easily the technology does this, the more likely they will be to “pull the trigger” on a new system. As such, when designing new technology infrastructure, simplicity should be emphasized to ensure smooth adoption by employees. Yes, the size of the challenge matters. This time, keep it small. 🙂

#3: Which came first: chicken or egg; trust or performance? 

Many conversations in this class have been some flavor (chicken flavor??) of the question: how do we build trust in AI? Some simple Google research on this topic turns up several academic journal articles arguing that people trust AI more when it performs better (low defects and high security), and just as many arguing that performance is predicated on increased use, meaning AI needs to be trusted and used before it can be broadly successful. Both arguments are technically right, and jointly they construct a virtuous cycle that looks something like this:

increased usage → better data → better performance → more usage (due to performance) → more cash flow → more investment in security → increased trust (due to performance and security) → increased usage, and so on, in a cycle.

Business School 101 says: “congrats, you’ve won because you’ve built a virtuous cycle.” However, there is a crucial question that remains. Where do you enter this cycle? Do you make performance better to start the flow? Increase security first? Force the usage on more people first without them trusting it? Indeed, trust and performance are linked, but it is challenging to know where the best place to start is, and you will likely get a different answer based on who you ask. Even in class, we have had various arguments about whether you need to increase trust first to enable performance or whether increased performance will inspire trust. Clearly, they are dependent on each other, placing this cycle at a standstill. So, the question remains: which came first? Trust or performance? 

#4: N = 1 and R = G in a world where N = Not Spending and R = Recession

(Note: N=1, R=G refers to the core thesis of the book The New Age of Innovation. A very short explanation: technologies are leading to an N=1 world, a sample size of 1, as products and services are increasingly personalized to create individual experiences for each consumer. To deliver N=1, companies need to build global networks and platforms, because no single company has the resources to deliver N=1 alone. Hence Resources = Global, or R=G.)

After nearly 11 years of economic expansion, a period that produced some of the most innovative technologies known to man, we find ourselves in a new reality. China just saw a 6.8% contraction in GDP due to the novel coronavirus, and the impact in the United States, where 1 in every 400 citizens is now actively infected, is expected to be much greater. Just five weeks ago we were discussing how N=1 and R=G help us understand the rapid expansion of artificial intelligence and the extensive personalization these algorithms let us enjoy. However, the mentality of many firms, governments, and individuals has shifted to a new mindset.

Gone (or temporarily paused) are the days of individual achievement as the world comes together collectively to defeat COVID-19. There are many pushes for universal action: global universal healthcare, universal stimulus that reaches everyone, supplies for all, democratization of rations, and more. We talked in class discussion about how single events can forever change consumer behavior, and if this move toward collectivism sticks, will there still be as much emphasis on N=1? Perhaps N=1 will be seen as strange, selfish, too personal, or out of alignment with global goals? Furthermore, as the economy recovers, there will be a contraction in cash, the resource at the core of funding technology projects. Will they be put on pause for the foreseeable future? Will technology investment focus on cost savings and automation rather than customer-facing N=1 personalization? We are going to learn a lot about the future of AI as the COVID-19 situation evolves (and ends). For now, N = patience to wait and see.

TLDR: Nobody knows what’s next. 

As much as the TO433 course has taught us, numerous questions remain. With one event or one technological change able to reshape the entire artificial intelligence landscape, anyone who claims to have a full understanding of this field is full of s*%t. So what I am most thankful for about TO433 is not how much it taught us, but rather how aware it made us of all that it did not. There is a whole world of AI out there, and the door is now open for us to go explore it.