#48 Democratization of AI with Evan Steeg
===
Evan: [00:00:00] Will there come a point with so-called artificial general intelligence or artificial superintelligence where it's not just the mundane tasks, Claire, that the AIs are doing, but the creative stuff, the intellectual stuff?
Evan: Why do they need us? Right? And that's going to be an inflection point, a risk point for society that I don't think we're ready to deal with. I don't think that our policymakers and leaders are prepared for it, and I don't think most of us regular folks are prepared for it. And I don't think it's two years away, or even five years away, but I wouldn't be shocked if it's less than 15 years away.
Claire: [00:01:00] Hello and welcome to the show, Dr. Evan Steeg.
Evan: Hey, Claire, happy to be here.
Claire: I am really excited about this episode, because the listeners don't know that we have been hanging out for the last couple of months working on a lot of fun artificial intelligence projects. But before we dive into your 30-plus years in AI, why are you so in love with artificial intelligence?
Claire: You've dedicated most of your life to it.
Evan: I have, I have. First, again, thank you for having me here. I'm really happy to be here. You've had some really illustrious and exciting guests, so I'm hoping I don't drag the curve down too much. I'll try to keep up. But no, I'm a big fan, as you know.
Evan: Yeah, we're seeing these days that it's impactful, for better and worse. I think mostly better, though we have to keep our eye out for the worse. And that was the promise even 30-some years ago when I first [00:02:00] got attracted to it. And I have to say, my first idea of it wasn't practical, what we can do with it, how it can solve problems in health care, business, and so forth. It was more at the intellectual level, almost a psychological thing. I was a sci-fi fan, a sci-fi reader.
Evan: And there were a few stories about artificial intelligences, and sometimes they were evil, right? They were rogue, rise of the machines, dangerous. But there was one passage in 2001: A Space Odyssey.
Evan: Right. Which was a huge movie back in the sixties when I was a kid. And there was a sequel called 2010. In 2010 there's a scene where the scientist, Dr. Chandra, is having this conversation with the AI called HAL. And it was a friendship.
Evan: It wasn't a programmer trying to program a machine. It was a friendship between two intelligent beings. And that really struck me: is this [00:03:00] possible? Could humans create sentient machines that can think, that can maybe feel, that can speak with you? At the time, when I was reading this or watching the movie, it was a long shot, a bit of a pipe dream. But aren't we seeing it today, a little bit, at least glimpses of it?
Claire: And it's interesting, because I like having these conversations with people like you; what you just shared is an unpopular opinion. People don't want to build these human connections with robotics and with these machines, but you find it interesting, which we're going to dive more into.
Evan: But I have to say, even I have my doubts now.
Evan: I mean, I've seen some things, and seen some projections of potential dangers, that give me pause as well. So I would say that my views have matured. They're not as golly-gee Pollyanna-ish as they used to be. I'm still thrilled by the possibilities, but I think I'm sober
Evan: to the risks as well. And as you may know, we've discussed one of my key [00:04:00] mentors in AI, who rather famously left his job because he's very concerned about the risks AI poses, while equally smart people dismiss his concern. So there's a real debate going on, right?
Claire: Yes. And this is where it will be fun to go into that and see what is happening. But I think it's important for the listeners to hear from someone you could call the OG of AI. When we think of artificial intelligence today, AI is ChatGPT to the world, right?
Claire: To the mainstream audience, that's what we think, and I'm part of that. I too was always curious about sci-fi, but I couldn't really see it existing in the real world. So it's wild that in such a short period of time we see it coming to fruition; but you have been in the space for so long.
Claire: And...
Evan: You keep emphasizing "so long." You make me sound [00:05:00] old, my friend.
Claire: Well, so long as in AI, yeah.
Evan: Yeah, I'm just kidding. It's good.
Claire: It's a mindset. But "so long" is actually kind of funny when you say that, because in artificial intelligence tech time, so long is like three months. So let's go back: you went to Cornell, and you made a shift into a more technological focus.
Claire: Is this because of this book, because of this idea? What made you continue to focus on AI?
Evan: It's a good question. Yeah, kind of weird. There was a book my aunt gave me. I was already primed, I think, by the sci-fi stuff, by the fictional ideas. And then I got this book my aunt gave me for a birthday around, I'm going to say, 1981.
Evan: It was called Gödel, Escher, Bach, by a fellow named Douglas Hofstadter, and it was famous, a bestseller at the time. He was a cognitive scientist [00:06:00] at various places, including, I think, Indiana University at the time. It was a real intellectual tour de force. And it was a geek book, let's be honest, right?
Evan: He went deep into the music of Bach and the structure of it. He went deep into Escher, M.C. Escher, the famous artist who drew those weird, self-referential drawings and prints. And Gödel, Kurt Gödel, was a logician who worked alongside Einstein at Princeton.
Evan: Not your typical pop-culture fare, right, to talk about incompleteness proofs and mathematical logic. But it was a brilliant book. And one of the core themes was: how does pattern emerge from complex systems, from complex things? And is intelligence,
Evan: however we define it, just a pattern that emerges from the interaction of lots of little pieces in a complex system? He really explored this. And so for me, to go from the broad science-fiction idea to, wow, here's a guy who's actually showing me how you might [00:07:00] build it, how you might construct it, how it might emerge from some kind of effort; that combination of "can it be done" and "here's how some people are trying to do it."
Evan: I was hooked. I was hooked, Claire. As you referred to, I was in the middle of a very expensive education, thank you to my parents; I was fortunate. I was happily studying political science, economics, foreign languages, and heading toward a certain sort of career, probably law school.
Evan: And I couldn't continue, because I had this idea that I had to do this. So I changed my major, and that cost me an extra year of undergrad, which I had to pay for. Luckily I won some scholarships, and then I went to grad school, came to Toronto here in Canada to study among the best people, including Geoffrey Hinton, the so-called Godfather of AI.
Evan: And it's been a fascinating journey, though it wasn't the most lucrative path. To be honest, I could have left with a Cornell undergrad degree, gone to Wall Street where a lot of my friends went, and I'd be much wealthier now. But I always wanted to [00:08:00] follow the intellectual muse, the interest.
Evan: It's an ADHD thing, I guess: follow the shiny object. But no regrets; it's been fascinating. And I've used AI to go into other fields. I took a deep dive into biology for many years, and it still fascinates me: the complexity of DNA and protein and RNA, and how it can be used for health care and environmental remediation and so forth.
Evan: But again, even biotech these days is AI-powered. So it's a great place to be, right?
Claire: And it's interesting, because you just said you chase the intellectual muses, so to speak. The academic world is very focused on the theoretical, on deeper research and the past, whereas artificial intelligence, as we know, is very future-focused:
Claire: where we're going. But was there a pivotal moment when you realized, okay, I'm doing a lot of academic research, looking at the interplay between art and science, seeing these complex [00:09:00] ideas and how we can apply them to everyday life; was there a moment where you thought, I can see how we can now solve real problems in the real world from this high academic space?
Evan: Yeah, and it happened, I think, when I was in grad school. Again, I was just very fortunate that Geoff Hinton let me into his group. A brilliant, brilliant man; he's a whole other podcast, and he's done a bunch, obviously. Number one, he's somebody who's well educated, but it's also about persistence, right?
Evan: You and I talked about this, and we're examples, right? It's when you have an idea and you don't let go, even when the naysayers are out there. At the time, and I don't want to get too much into technical details, he was pushing this whole deep learning, neural network approach, massively distributed parallel computing, when the dominant paradigm for AI was the so-called symbolic, logical approach, where you explicitly lay out a bunch of [00:10:00] conditional clauses: the expert systems of the time, and so forth.
Evan: So in other words, he was pushing ahead on something that the experts at the time were saying was wrong. Fast forward 25, 30 years, and they're giving him the Turing Award, basically the Nobel Prize of computer science, because he was right. Anyway, I was in this group.
Evan: And of course, he also attracted other brilliant, brilliant people. So for me it was a combination of Geoff and his intuitions about the right way to understand how the human brain works and how we might replicate that with computers, and deep mathematical theoreticians, people like Radford Neal, just brilliant at Bayesian analysis and so forth.
Evan: Probability, statistics, the thorough theoretical grounding, a solid foundation for the stuff we were doing. And then there were smart people applying this stuff, right? There was another fellow in the group named Sid who, for example, was using neural networks, AI, machine learning, to pioneer ways to help people with [00:11:00] disabilities overcome them.
Evan: His first PhD project was something called Glove Talk. You put on this glove, which was all wired up with sensors; it could tell where your hand was in 3D space and how your fingers were moving. He basically used a neural network; he taught the system to learn how to do sign language, and it would generate speech. So he'd move his hand,
Evan: and it would say, "Hello. How are you?" So again, very early, and there's lots more that's been done in this area and others since then. But these were the kinds of things I saw where it wasn't just in the lab; this stuff can actually do things in the world.
Evan: Likewise, there were people doing things with computer vision, using AI to recognize objects, recognize faces, which is now everywhere. It's in your local mall, it's in the airports, right? And then I was fortunate to be contacted by people in industry, like DuPont, for example, who said: hey, come and apply this stuff to what we do.
Evan: So protein design, [00:12:00] chemistry, biology. Early on, I had that good balance between the theoretical and the practical. And that's what really got me, during my postdoctoral fellowship, to say: I really respect academia, I really respect the depth, the theory, the foundations, but I want to build stuff.
Evan: I want to get stuff out there. I want to commercialize stuff and see it go. So I did my first startup.
Claire: And this is where we're ramping up. Okay. So anyone listening might be like: wow, okay, I feel overwhelmed, Dr. Steeg. You are very intellectual and really focused on these complex ideas and on deep learning with artificial intelligence.
Claire: The listener right now, including myself, is thinking: okay, AI (we're going to [00:13:00] jump around here because we're both ADHDers), where is it going?
Claire: How does this impact me? I'm running a small business. As an individual, or as a small company or business, what are the things that we need to be
Claire: focusing on right now with this crazy movement that's happening so fast?
Evan: Yeah. I'm excited, because I think the major thing about today's AI is that the barriers have come down. AI used to be the purview, the hoarded private possession, of only the very largest companies and the very largest government agencies.
Evan: People who know about Alan Turing, or saw the movie about him: that was the British Ministry of Defence. The U.S. Defense Department, IBM, oil companies, Goldman Sachs. Only these very rich, powerful, large organizations, even 10 or 15 years ago, could really use AI.
Evan: And most of us, consumers and citizens, were passive users at best, [00:14:00] victims at worst, right? Our data was used, we were fed things. I mean, Netflix has been using AI to recommend which movies to watch for years, and it's harmless and they do it pretty well. But we weren't users.
Evan: We were the product, right? We were the viewer. We were part of the algorithm; we were inputs, right? And as you show so well in what you do and what you teach, now it's in our hands.
Evan: It's empowering, right? The barriers have come down. There are more people trained in how to use it, there are just more people doing it, better tools, and more competition in the marketplace, so companies can't charge a million dollars to use an AI program.
Evan: Now they have to bring it down to 20 dollars a month or less, right? And so now we're starting to see the pieces fall into place: having data, having goals that are clear about what you want and what you don't want, having an understanding of which tools are best for which things. That's what you and I and others are trying to help people with.
Evan: But I think it's an exciting time, [00:15:00] because now AI is like another tool in your own pocket, in your own hand, in your own business.
Claire: We were having this conversation offline and said: wait, wait, wait, let's save this for the podcast.
Claire: This is a good discussion: this idea that for so many years, as we touched on earlier, we were essentially products or by-[00:16:00]products of these bigger organizations. Whether we were the inputs or part of the algorithm, we had very little control over what was being fed back to us, whether it was something as simple as a hashtag we clicked on, something we scrolled past, or something we said that our phones picked up. I was interviewing another guest, and it was interesting, and I'm curious to hear your opinion of where we're going as a culture of media, now that we can personalize our own channels and curate our own deliverables. Someone can come onto your website or my website and talk to a chatbot that connects with
Claire: the brain of Dr. Steeg, and we can privatize and customize the different learnings people can engage with. It's [00:17:00] almost this idea that we're our own media companies, all of us our own brands; now we have far more access to the tools and to curation. Do we think there will ever be a time
Claire: when the big giants, the Amazons and the Netflixes of the world, will still always be prevalent? Or do you see a world where individualization and our own media channels take preference in the way we consume information?
Evan: That's a good question.
Evan: I think it'll be a mix, and I think we're all working and playing and struggling to figure out what the right mix is. And I would draw two extremes. On the one hand, yes, it's cool if I'm empowered to curate everything on my own: what I read, what I listen to, what I watch, whom I talk to.
Evan: And by the way, not just curate but create, right? I mean, AI now helps [00:18:00] me write songs. In five minutes I can...
Claire: That, I have a problem with, though. That freaks me out. It does.
Evan: And there are issues around intellectual property.
Claire: And the level of non-AI intervention.
Evan: Yeah. But I mean, it's cool, right? Because I can get on one of these things and tell it: I want a song about this, and I want it to sound like Stevie Wonder in 1975, and it'll do it pretty well, right? So that's all good.
Evan: And again, I think it's empowering. This idea of personal curation is interesting, but I think we can also be overwhelmed, right? It's the old idea of having too many choices. There have been articles and books written about some of the psychological problems in our Western society of having too much choice; it can be a cognitive overload.
Evan: It can be almost stressful, right? There were stories about people from the Eastern Bloc, from the former Soviet Union, from East Germany, when they first walked into Western supermarkets: wait, there are like 15 kinds of bread, 80 kinds of [00:19:00] cereal, I don't know what to do.
Evan: Right. And that's kind of what it is now. I still feel like that today, and I didn't grow up in the Eastern Bloc. And I love music. I play music, I listen to music, and I am certain that there are amazing artists
Evan: that I will never see or never hear, because I go to my old favorites, right? But I know that I'm missing out. And so I think there's a market, and you're seeing it, for companies, for influencers, for curators to provide that for you.
Evan: You can tune into someone who'll say: if you like this kind of music, I'm going to give you the stuff you like, but I'm going to mix in five new artists every week, check it out. And that's good. And I think we're going to do that. And I think the recommender systems...
Claire: We already see that with the use of Spotify and what it does.
Claire: I mean, I think over time, like anything, it's the Watson idea: the more you feed it, the better [00:20:00] the outputs are. Now we're talking a lot about curation and creating more and more, and there being more options. But then we look at the other side of the spectrum: the working world.
Claire: We're now able to automate probably the most monotonous, boring parts of our jobs, the part that, I would say, a lot of people, including myself when I worked nine to five, dreaded: oh, the drudgery, I have to do that, I have to plug this in.
Claire: Now, if you know how to use the tools and you're open to learning them, that part is automated.
Evan: Yes.
Claire: Maybe you can't answer this, but will this, should this, improve employee happiness at work? Or have we just created another administrative task, having to [00:21:00] learn all these other tools? For some people, does it just put more on our plate?
Claire: Now we can work faster, and if you can work faster, you should be able to accomplish more. You know, that's a problem, right?
Evan: Yeah. The history of automation and other, quote, labor-saving inventions is that you get a...
Claire: revolution.
Evan: Yeah. You get a brief respite, where the workers say: hey, this isn't quite right.
Evan: And then, like three months later, the manager figures out: oh, now we can make you work more hours, or get more productivity out of you, or pay you less per hour. And so there's that race back and forth.
Evan: One of the major debates these days is between the people who say AI will replace workers, will replace jobs, and others saying, more optimistically: no, someone who uses AI may replace you, but basically AI will enhance your job, right?
Evan: It will make your job better, and it will create more jobs, right? And I think the honest truth, if we're fair, if we're honest, is both, right?
Claire: Yeah,
Evan: There is job loss, there will be job loss, and there will be [00:22:00] more jobs created, right? Just like with Henry Ford.
Evan: The people who made buggy whips for horse and carriage went out of business. They lost their jobs. But many more people were employed making automobiles, right? And I think that will continue, and it is continuing, and some of the experts are quoting data on that. But I do think, when we get into the long-term AI stuff, this will be interesting: will there come a point, with so-called artificial general intelligence or artificial superintelligence, where it's not just the mundane tasks, Claire, that the AIs are doing, but the creative stuff, the intellectual stuff?
Evan: Why do they need us? Right? And that's going to be an inflection point, a risk point for society that I don't think we're ready to deal with. I don't think that our policymakers and leaders are prepared for it, and I don't think most of us regular folks are prepared for it. And I don't think it's two years away, or even five years away, but I wouldn't be shocked if it's less than 15 years away. [00:23:00]
Claire: I would agree. And I think the last thing we want is for everyone listening to this episode to just run away, run away. Because, like with anything, there's so much
Claire: interest in this topic, and so many human touch points at this stage, that there are a lot of people, including ourselves, who care about how this rolls out into everyday life, how we use it and how we interact with it. But I do think that maybe we're not yet at the point where the whole funnel is done for us. We're seeing what we call agents today, instead of the algorithms or the machine learning behind the scenes that we don't see. If we are searching for a flight,
Claire: it will help find the cheapest flight to London, let's say. But I think we're moving to the next stage, and correct me if I'm wrong, I'm curious to see where the progression, the phases, are going: we want to buy the flight, and it will now find the [00:24:00] flight, book the flight, and suggest the next list of things we need to do to prepare for the trip.
Claire: I'm just wondering, if you say we're not ready, how can we be ready for this world where we will be interacting more and more with robots? I know the word has such a sci-fi connotation.
Claire: But the reality is we do. We're waking up talking to robots. And this is for anyone: do you say, "Hey Google, what's the news?" That is a robot. And more and more... I was just reading about another robot that can, start to finish, clean up a room, tidy, get the dishes into the dishwasher.
Claire: Jobs like that will now be performed exclusively by machines, and I'm not going to lie, I will have one of those. There's a podcaster I listen to who has a robot tennis player. [00:25:00] He had the hardest time finding people who would just go play tennis with him on his schedule; he has a crazy schedule.
Claire: I mean, it's happening. So where do you think we need to be jumping in and making more of a personal judgment call on how this is going to keep rolling out? Because I do agree, we are moving to a place where, start to finish, there is a potential for little to no human interaction in something being performed.
Evan: Yeah. I think you have to differentiate between the personal level and the societal level. And I guess when I was talking a few minutes ago about the potential for large-scale unemployment: again, it's not here, it's not immediate, but it's a potential.
Claire: But this conversation is good to have, because if we don't have it, then the decisions we're making today could get ahead of us in ways we can't control. And that's where, [00:26:00] let's bring it back to the audience here.
Claire: What do you do right now? What are the ways we can be ethical in our approach? What are the ways we can be inclusive, making sure that we're addressing societal challenges and really creating greater impact, good social impact, in our communities using it?
Evan: Yeah, sort of future-proofing ourselves and our communities, right?
Evan: Resilience, right? A buzzword these days, but important. I think we have to recognize the ongoing need for the human touch, whether you want to take that at a kind of evolutionary-biology level, that's how we evolved, or you want to look at it at a more spiritual level.
Evan: However you want to view it, we are humans. Let's not lose that, and we won't lose that. And so it becomes a conscious choice: are there certain things that you don't want the machine to do for you or with you, right?
Evan: But I tell young people when I talk to them: don't forget the human aspects, the so-called soft skills. Because frankly, all the more that you can have an AI [00:27:00] bot write your computer code for you, write your business proposal for you and so forth, all the more reason why the soft skills matter. Can you talk to a customer and listen to them, understand their goals, understand the requirements?
Evan: Can you sell? Can you inspire, can you lead? So I think people who dive so deeply into just being a technical expert, as a way to race against the clock, to race against the machines and keep their employability, have to be aware. Because one of the differentiators, if I'm hiring, is: sure, you can program in Python, you can architect a solution.
Evan: And that's awesome. But can I put you in front of a customer? Can you write a compelling pitch? And even if ChatGPT drafts it for you first, do you double-check it and make sure it doesn't sound canned, that it's compelling, that it's personalized to whomever you're speaking to?
Evan: Right. So I don't think the human touch will go away. And that may save us both, [00:28:00] and I mean "both" in the personal sense of how do I make sure I'm still employable, or that my children are employable,
Evan: and also in the societal sense: hopefully our leaders in industry and government will keep in mind the need for human-scale things in our entertainment and in our communities, with parks and infrastructure.
Claire: I agree. Just imagine it's you and a world of robots, no humans. You have to ask yourself, and this is another deep thinking question, what's the purpose of living if you're not building human connections?
Claire: That's how we're designed. That's what we're driven by. And it's really interesting, because I think fundamentally we start looking at the question of, well, what's our purpose. But more importantly, we're noticing that with artificial intelligence, we all can use it. You can get anything you want.
Claire: You can ask a question and it will give you an answer. But for people who [00:29:00] are using it, would you agree that being curious is one of the most crucial skill sets to have when interacting with these machine-learning tools and products?
Evan: Yes, absolutely. I think curiosity is one of the most important facets in anything, right?
Evan: People talk about lifetime learning, driven hopefully not just by fear of being unemployed but largely by curiosity. It's love, right? It's getting into something. It's excitement. And, you know, we all sometimes have to do things and learn things that aren't thrilling us.
Evan: That's part of life as well; it's part of being a grown-up. But I think at their best, science and art are both driven by curiosity. For me, science has always been about: how does this work? Why does this protein fold up into this 3D shape, and how is it implicated in cancer?
Evan: I want to understand. Yes, I want to solve things, I want to cure things, but the first instinct is: wow, this is cool, how [00:30:00] does it work? And the same thing, obviously, with art. One of my favorite painters is Caravaggio, the Italian Renaissance painter.
Evan: Like a lot of artists, he was driven. He had to do it; he had to paint. And he was clearly fascinated with light and dark, with the juxtaposition of light and dark. And I think it was curiosity: what if the sun were coming in on St.
Evan: Peter at this angle? What would it look like on his skin, on his clothing, reflecting off the shiny armor of the soldier? It's curiosity. It's: let me try this, let me see what it looks like. And certainly with the AI tools today, curiosity should propel you to try them and to dive deeper with them.
Evan: And again, as I've seen you demonstrate in your courses, it's a necessary thing, because you shouldn't just accept the first answer out of the so-called mouth of these things. You ask a question of one of these LLMs, and sometimes it's really good right off the bat, but often it's like, what? [00:31:00]
Evan: So you have to wait a second. You have to prompt it and re-prompt it, sort of direct it, so it becomes more of a dialogue, just like an interview with a human, to elicit answers that are sensible and hopefully solve your problem.
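A minimal sketch of that prompt-and-re-prompt dialogue, assuming the OpenAI Python SDK; the model name and the follow-up instruction are placeholders, not anything recommended in the conversation:

```python
# Sketch of an iterative prompting loop: ask, read the answer, then steer it,
# keeping the whole exchange in the message history like an interview.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {"role": "user", "content": "Draft a one-paragraph pitch for a local bakery's newsletter."},
]

for _ in range(3):  # a short dialogue rather than a single one-shot prompt
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    print(answer, "\n---")

    # Keep the model's answer in context, then redirect it.
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": "Less canned, warmer tone, and mention the Saturday market."})
```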
Claire: And I totally agree with that.
Claire: This sense of curiosity just drives the next question further. And the more those questions keep coming up, the more we start to see the holes or gaps in whatever we're building. Let's take a big institution that we can see is hugely impacted.
Claire: I don't know about you, but weirdly enough I can feel it, in some capacity, shifting, crumbling, a paradigm shift: the education system. And I come back to this from having been a high school teacher. There has been one way to teach [00:32:00] for so many years,
Claire: decades and decades of the teacher in front of the classroom, sharing the knowledge and information and encouraging discussion around a topic, but ultimately leading the discussion. Well, now, from an educational point of view, there's been a huge shift, where the students' access to knowledge
Claire: is equal to the teacher's. The teacher has more experience, that's another thing, but it has to shift the way we're teaching and learning, because I have the same access to knowledge as you and anyone else. And it's changing the way we learn. Compared to when you were in school, what
Claire: is the way that we navigate this new way to educate, Dr. Steeg?
Evan: As you know, I used to teach as well, and I loved it. I didn't always love the bureaucracy [00:33:00] around it, the administration and so forth, but I love teaching, especially a smaller class. That look in the eyes when the student gets it, that satisfaction, there's nothing quite like it.
Evan: People talked about this a generation or so ago, when calculators came out: oh, we have calculators now, how do we teach math? Because the kids are just going to click a button.
Evan: And now it's: why even assign a problem set or an essay, because we know they're going to go to ChatGPT to do it. There was that really fascinating workshop you held last year in downtown Kingston, where Kevin Deluzio, the Dean of Engineering at Queen's,
Evan: and a couple of other folks addressed that issue as educators. He gave a fascinating answer. What I'm hoping is that the way our professors handle things now is not so much to test whether you know this formula or can solve this equation.
Evan: That's a given; the machines are going to do that anyway. [00:34:00] So now I want to see people give assignments that are more like: tell me the multi-step process of how you would solve global warming. Down to: what stakeholders do you bring in, what kinds of engineering need to be applied?
Evan: What are the key technologies? What are the costs? In other words, to think at this bigger level.
Claire: Because we can't...
Evan: Right. Not at the micro level of how do I solve this, but more like: wow, you engineers, you are the future. You're going to save our world or destroy it.
Evan: So think big. I'm being silly here, but: how would you build a bridge from New York to England? What steps would you take? What are the challenges? And let the machines work out the fine details, right?
Evan: Yes, the concrete structures and the static stresses on iron and all that stuff, right?
Claire: Yeah, and I love that; that is what it is. I can't help it, because I think I'm like you: I think so optimistically about what technology can do, whether it's accessibility or the fact that we live in a multimodal [00:35:00] world where you can communicate in any way,
Claire: any medium. For so many decades and years it was so text-heavy, talking and writing, and now it can be pictures and images and video, all integrated, multimodal.
Claire: Maybe it's finally an opportunity to have more accessible learning environments that are not catered to standardized testing of one kind of student: here's the test, here's a multiple choice, okay, you failed, so you're not that smart. Funny enough,
Claire: this is a super tangent, but I took a cognitive test. I love applying for random jobs just to see the process; I just find it entertaining. Well, I couldn't even move past the first stage of the test, because I failed the cognitive part with 32%. It was all these questions, and I was thinking: this is what AI could solve.
Claire: Because it wasn't the big [00:36:00] problem-solving. And I think even at these job-application levels, and maybe I'm a little butthurt here that I failed the cognitive testing and couldn't even apply for the job, but even at that level: what are we teaching, or what are we assessing?
Claire: We've been assessing: is it A, B, C, D, or E? And in the real world, correct me if I'm wrong, there isn't an A, B, C, D, E. It's all of the above: find the one that best suits the situation. So if we're still building a future of choose A, B, C, D, or E, hypothetically speaking, how are we setting anyone up for success, when the real world is: solve the big problem the best you can, use the tools?
Evan: I feel you, because again, we've talked about our ADHD and so forth. The way I look at it, my mother used to say there are a million different ways to be smart, right? There are people who are very [00:37:00] verbal,
Evan: people who are very mathematical, people who are very hands-on and practical. I knew a girl growing up who was, quote, terrible in school, but she had this ability with animals, like the horse whisperer; it was just amazing, this communicative ability. And it turns out that's what she does for a living, and she's great at it and she loves it. And I'm going to let our audience in on a secret here. Only a few of my friends know it, but I think it says something about the failure of the assessment systems, the testing systems, in education.
Evan: So long story short, I was accepted into Cornell University, one of the top schools in the world, as an undergrad, with a proviso. I was told verbally: you are accepted into the College of Arts and Sciences at Cornell as long as you do not go into mathematics or science, because my verbal SAT score was really high, kind of off the charts, while my math score was below the average for Cornell undergrads. And guess what? I ended up with a [00:38:00] degree in mathematics, with an A average, with the highest honors, and I won a scholarship. Again, it's not "look how smart I am." It was a lot of effort. It's hard work, it's dedication,
Evan: it's focus, it's doing what you have to do. But it also tells me, by the way, that our testing systems kind of suck, and that we should be very careful about dismissing anybody because they, quote, failed something, or didn't get a certain grade or a certain score. And by the way, the same goes for yourself and for your children: don't accept it.
Evan: If someone says Johnny or Jane can't do well in school: no, no. Talk to the teacher, get a better teacher, get a better school. We all have immense capabilities that can be brought out with the right facilitation. I don't even call it teaching; it's facilitation. It's inspiration, right?
Evan: It's leading. It's helping you learn. People are naturally curious. Claire, you have a young child at home. One of the things that fascinates me about little kids [00:39:00] is when they go through the stage of "why are there clouds?" And then you answer that, and they go: well, why this?
Evan: And why is the sky blue? And why does it have to rain at all? And why is there lightning? It's this natural curiosity, this natural intellect, which unfortunately school will sometimes beat out of them.
Claire: Yeah, it does fade. And to come full circle: how does AI play a role in education?
Claire: I'm curious to know from you: do you think this is finally an opportunity to, at the very least, build a proper diagnostic tool? For any Harry Potter fans out there, I always think of the Sorting Hat: it's not about being level one or failing,
Claire: it's that you belong in Gryffindor or in Slytherin, to a certain [00:40:00] type of people. Or the Divergent series, another great series; I love the idea where they divide the population based on skill sets and temperaments. It's written about, but we don't see it happening in everyday life.
Claire: And I think maybe artificial intelligence, now that we have so much data, can create these patterns and build out the right fit. I remember arguing with someone, saying: I do believe we can start personalizing education. And they said, well, teachers have enough on their plate.
Claire: And I said no, I'm hoping this creates more personalization, and more accountability on the student in some capacity. Education is the foundation of our future: the students who are here today, what are they learning so that they can solve the real big problems?
Claire: So, with a final thought as we wrap up this episode, Dr. Steeg: if we take an optimistic, forward-looking view of the future, what would you say to [00:41:00] listeners who are wondering, where are they going with all this, what are they trying to say? I think we've brought up a lot of things to question, to look at, to be curious about. But as an individual, or as an organization, moving into
Claire: a very AI-driven future, so to speak, how does one balance this beauty of innovation, the ethical implications, accessibility, and making sure we keep that human-centric touch in the world? What would you tell these individuals moving forward, now that you've seen decades of AI?
Evan: Well, let's start small and then we'll crescendo out big. Okay.
Claire: Great musical reference
Evan: At the micro level, this accessibility of AI, these barriers coming down in terms of cost, usability, and [00:42:00] so forth: there are organizations, there are people, for whom even just a dashboard, not even anything intelligent, not even AI, just getting certain data about, let's say, what's going on in the organization, is valuable.
Evan: What's their sales history? How many dollars of business did my little company do in Belleville, Kingston, and Brockville last year? Are there any trends? Any seasonal trends? Are there reasons why Brockville is underperforming compared to Belleville,
Evan: or vice versa? Even a dashboard that isn't, quote, smart can surface certain facts that an owner or manager obviously wants to know. And that's crucial. I've seen this, and I've helped people with this. So that's really cool. Just knowing what's going on is a big step.
Claire: As opposed to, without AI, manually going through pages and pages and pages of data, trying not to bring a very biased perspective to it, and trying to come up with some sort of [00:43:00] conclusion. That's what I'm saying; I just want to highlight, for those of us listening, what the transformation is. Before, you have all this data and you're like, what do I do with it?
Claire: And now you're saying we see an interface, or a page that just says: hey, here are the results.
Evan: Exactly. And so again, it's step by step. You crawl, then you walk, then you run, right? So number one, sometimes just getting the data and putting it in a dashboard, a visualization, just seeing it, that's step one. And by the way, the human brain is great at this.
Evan: And the cool thing is that this can now be scaled, because you can maybe memorize 30 customers, or even 300 customers, but can you memorize 3,000 customers? That becomes tricky. So this is where the data, the analytics, the AI can help. So step one: just have the data, have a dashboard. Step two, to your point: what if it could be made a little more automatic?
Evan: What if, instead of having to look for something, or having someone program the dashboard to tell you [00:44:00] something, there were alerts that came up and said: hey, this festival is happening in this part of Eastern Ontario, I think this is a great selling opportunity?
Evan: Heads up, June 12th, check it out, have this product ready, right?
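A minimal sketch of that crawl-then-walk idea, assuming a simple CSV of sales data; the file name, the town/date/amount columns, and the alert threshold are illustrative placeholders, not anything from the episode:

```python
# Step one: just see the data (yearly totals and seasonal trends per town).
# Step two: a crude alert when a town runs well below its usual monthly sales.
import pandas as pd

sales = pd.read_csv("sales.csv", parse_dates=["date"])  # hypothetical columns: town, date, amount

yearly = sales.groupby("town")["amount"].sum().sort_values(ascending=False)
monthly = sales.groupby([sales["date"].dt.month, "town"])["amount"].sum().unstack(fill_value=0)
print(yearly)
print(monthly)

months_seen = sales["date"].dt.to_period("M").nunique()
typical = sales.groupby("town")["amount"].sum() / months_seen  # average month per town
latest_month = sales["date"].dt.to_period("M").max()
latest = sales[sales["date"].dt.to_period("M") == latest_month].groupby("town")["amount"].sum()

for town, amount in latest.items():
    if amount < 0.7 * typical[town]:  # arbitrary 70% threshold for the sketch
        print(f"Heads up: {town} is underperforming this month ({amount:.0f} vs typical {typical[town]:.0f}).")
```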
Claire: It's triggered to align with your interests that have been
Evan: Exactly. So it's demand forecasting. I think I told you this, Claire: there was a talk someone gave, I think someone from IBM, actually, about personalization. You and I naturally tend to think of personalization as a personalized email, a personalized message, a personalized mailing, if you're doing marketing or communications outreach,
Evan: or fundraising. That's all good. But in this talk they argued that personalization is everything. It's not just having the right message; it's having the right product that the person wants, at the right time, at the right price. So personalization, for a company or for a nonprofit organization,
Evan: is really a whole [00:45:00] philosophy, a whole approach and commitment, right? And AI can certainly help with this. You go from static data and dashboards, to automatic alerts, to prediction, predictive AI, which is a big thing; that was what most of AI was up until the generative stage of about three or four years ago.
Evan: And now you have generative AI, where it's not just "oh, I've got the alert, I should write the message." Now it's: I don't have to write the message. The AI will write the message. The AI will write a thousand personalized messages for all my customers, based on a segmentation of the customer data by demographics, by geography, by previous responses, by language, right?
Evan: So that's the dream. And again, you don't have to start big the way IBM or ExxonMobil does it. As you've shown with your courses, getting five or six skills with an LLM under your belt can deliver huge time savings for a busy business owner or professional.
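A rough sketch of that segment-and-generate idea, reusing the assumed OpenAI client from the earlier sketch; the customer records, their fields, and the prompt wording are hypothetical placeholders rather than anyone's actual pipeline:

```python
# Generate one personalized message per customer record, varying the draft
# by geography, language, and previous purchase, as described above.
from openai import OpenAI

client = OpenAI()

customers = [
    {"name": "Ana", "town": "Kingston", "language": "English", "last_purchase": "garden tools"},
    {"name": "Luc", "town": "Belleville", "language": "French", "last_purchase": "patio furniture"},
]

for c in customers:
    prompt = (
        f"Write a short, friendly promotional email in {c['language']} for {c['name']} "
        f"in {c['town']}, who last bought {c['last_purchase']}. "
        "Mention our spring sale, keep it under 100 words, and avoid sounding canned."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {c['name']} ---\n{reply.choices[0].message.content}\n")
```

The same loop scales from a handful of records to the thousand-customer case Evan mentions; only the data source changes.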
Claire: [00:46:00] I guarantee you will save five hours a week.
Evan: Yeah. Yeah.
Claire: Yeah. And I think that's great, and this is a great way to wrap up. I hope the listeners have enjoyed hearing the brilliant Dr. Steeg talk about all these fascinating elements of artificial intelligence, and also the human-centric side: where we are going with building communities and building new products that include everyone.
Claire: You're booking a lot of things on the road. I know you are heading into 2025 with keynote speaking in Bali, on topics around security and privacy. Do you want to speak on some of the big topics? For people who are listening and thinking, oh my goodness, I need to have Dr.
Claire: Steeg come in and build out a product, or come speak to our organization, [00:47:00] can you share with the audience what you love to talk about?
Evan: Sure. Yeah, again, for me it's about the barriers coming down, the democratization of AI. Don't just be a victim of AI; be an owner, be a user.
Evan: To do this right, in a scalable way (and most companies want to grow at some point: grow their revenues, grow their customer base, or just grow the degree of automation so they can spend more time on the golf course or with their kids),
Evan: at some point you're going to have to connect these AI systems to your own data, right? And it's a topic I do a lot of work on: so-called ethical AI, privacy, non-bias, security, interpretability, transparency. These are some of the elements of ethical AI.
Evan: Ethics is such a fancy word for doing things right. It means doing things right. It means doing things legally, avoiding regulatory traps. It means being a reasonably good member of our community. And so, anyway, the details of how to do ethical AI: how do you, for example, make sure that [00:48:00] when you're talking to an LLM, your personal, confidential, proprietary data is not just going to the cloud, or going to whoever is going to sell it next? That can be challenging. How do you connect something? How do you have a hosted system,
Evan: your own hosted private database, either on premises or in a secure cloud? How do you connect it to the AI systems? It's all very doable now. If this were five years ago, I'd have to say: okay, have you got five hundred thousand dollars to spare? I can set something up for you.
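One possible shape of that keep-your-data-on-your-own-system setup, sketched under heavy assumptions: the local model endpoint, its JSON request and response format, and the document folder are hypothetical placeholders, not a specific product or Evan's actual architecture:

```python
# Retrieve relevant snippets from local files, then send only those snippets
# plus the question to a model you host yourself, so proprietary data never
# leaves your own infrastructure. The endpoint URL and JSON shape are made up.
from pathlib import Path
import requests

LOCAL_MODEL_URL = "http://localhost:8080/generate"  # hypothetical self-hosted model

def retrieve(question: str, folder: str = "private_docs", top_k: int = 3) -> list[str]:
    """Crude keyword retrieval over local text files; nothing leaves the machine."""
    terms = set(question.lower().split())
    scored = []
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8")
        score = sum(text.lower().count(t) for t in terms)
        scored.append((score, text[:1000]))  # keep a short snippet per document
    return [snippet for score, snippet in sorted(scored, reverse=True)[:top_k] if score > 0]

def ask(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    resp = requests.post(LOCAL_MODEL_URL, json={"prompt": prompt}, timeout=60)
    return resp.json().get("text", "")

print(ask("Why is Brockville underperforming compared to Belleville?"))
```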
Evan: This is something that can be done. There's tools out there. So I just found out I'm going to be going to Bali, Indonesia in December, which is a, I love Canada, but it's a great time to leave. It's okay to say you want to
Claire: go. To a tropical part of the world.
Claire: I am very envious. But I am so excited for you to start speaking around the world on these topics and changing the world. I love the idea of ethical AI. And [00:49:00] for anyone who knows this is very important to their organization, to their business, and to the growth of their company: in the show notes
Claire: you'll be able to check out Dr. Steeg, his website, his work, and how you can connect with him and book him going into 2025. I can't believe that's not even that far from now, even though we're talking in April; it's crazy how much is happening. So, Dr. Steeg, offline and online, it's so fun to connect with you.
Claire: I've learned so much. We'll have you back on, and I'm sure so much will have changed; we say three months is like ten years in AI. But thanks for joining this conversation on The Small Town Entrepreneur, and I hope we just keep discussing these things that are happening and keep the human side, the human element, in the technology being built.
Evan: for sure.
Evan: It's an exciting time. Great to be here. Thank you again.
[00:50:00]