The Developing Life Podcast
The Developing Life Podcast is your go-to destination for creative minds, entrepreneurs, and leaders striving to grow and thrive in today’s ever-evolving world. A collaborative effort led by Davron Bowman, Heather Crank, and Tru Adams, each episode dives deep into the intersection of creativity, community, and strategy, offering actionable insights and inspiring stories from industry experts, visionaries, and innovators.
🎙️ What You’ll Gain:
- Proven strategies to elevate your creative and professional journey
- Insights into building community and fostering collaboration
- Practical advice for turning passion into purpose and profit
- Real stories of overcoming challenges, scaling success, and staying inspired
From navigating the complexities of running a business to exploring the transformative power of human connection in the age of AI, The Developing Life brings you honest conversations, thought-provoking ideas, and the tools you need to unlock your full potential.
🔑 Who It’s For:
- Creatives seeking clarity and growth
- Entrepreneurs looking for actionable business strategies
- Community leaders and collaborators who value connection
- Anyone passionate about blending creativity, commerce, and purpose
Join us and discover how to build a life—and a career—that inspires, connects, and creates lasting impact.
The Developing Life Podcast
Examining AI Through The Human Lens with Helen and Dave Edwards
How do we ensure AI enhances rather than replaces human creativity? In episode 22 of The Developing Life Podcast, technology executives and founders of Artificiality, Helen and Dave Edwards, share their vision of a future where AI and humanity work hand in hand.
Explore their strategies for navigating the ethical challenges of AI, maximizing business growth, and embracing the future of tech innovation.
----------------------------------------------------------------------------------------
In this conversation, technology executives Helen and Dave Edwards delve into the intricate dynamics of artificial intelligence and its profound impact on the future of work, creativity, and society.
Drawing from their extensive experience, they discuss the critical need to balance AI innovation with a human-centric approach, emphasizing that while AI offers unparalleled opportunities, it cannot replace the unique value of human creativity, empathy, and judgment.
They explore how AI learns from historical data but struggles with truly novel, unpredictable situations—areas where human ingenuity remains indispensable.
Helen and Dave also examine the ethical challenges surrounding AI's rapid adoption, particularly the tension between automation and augmentation, and how this impacts both the creative industries and the broader workforce.
The Edwardses advocate for a thoughtful, responsible approach to AI development that prioritizes human well-being. They discuss the importance of creating a hopeful future with AI, rather than succumbing to fears of widespread job displacement or utopian promises.
By examining the intersection of cognitive science, design, and AI, they call for a shift in focus from merely replacing human tasks to enhancing human capabilities. Their insights challenge the mainstream AI narrative, urging society to imagine and create a future where AI supports human flourishing rather than undermining it.
----------------------------------------------------------------------------------------
Suggest topics, guests, show your love or tell us how we can improve!
LEARN MORE ABOUT THE DEVELOPING TEAM AND CONTINUE THE CREATIVE CONVERSATION
- LINKEDIN: www.linkedin.com/groups/14310677/
- WEBSITE: thedevelopinglife.com/
FOLLOW TEAM MEMBERS
- Heather Crank | crahmanti.com/ | www.linkedin.com/in/heather-crank-crahmanti/
- Tru Adams | truatart.com/ | www.linkedin.com/in/tru-adams/
- Davron Bowman | thedevelopinglife.com/ | www.linkedin.com/in/davron-bowman/
---------------------------------------------------------------------------------------------------------------------
#Design #Creativity #Technology #AIinDesign #HumanConnection #CreativeBusiness #HumanStories #Podcast
00:00:00:00 - 00:00:14:18
Unknown
Welcome to episode 22 of the Developing Life podcast. We are diving deeper and deeper into the possibilities and complexities of artificial intelligence, encountering more and more opportunities and an equal amount of challenges along the way.
00:00:14:20 - 00:00:50:08
Unknown
But before we wind up in way over our heads, how do we make sure that the technology does not swallow up the human experience? Today, we are excited to have with us technology executives and husband-and-wife duo Dave and Helen Edwards. Helen and Dave have been working together since 2009, stepping into the AI space in 2012. Since founding their company Artificiality in 2019, they have dedicated their decades of experience and expertise to exploring the intersection of AI with cognitive science, complexity theory, philosophy, and design.
00:00:50:10 - 00:01:21:20
Unknown
Dave and Helen are not only seasoned professionals but thought leaders who challenge the mainstream narratives of AI, striving to balance excitement about its potential with a critical awareness of its limitations. With our host Heather Crank, we'll take a deeper look into how Helen and Dave combine the sciences and humanities to help humans make sense of AI's role in our lives, ensuring that these advanced technologies are developed and implemented in ways that truly benefit humanity.
00:01:21:22 - 00:01:23:01
Unknown
Thank you to
00:01:23:01 - 00:01:42:11
Unknown
Helen and Dave. Welcome! I'm so glad that you are joining us on the Developing Life podcast. I met you a few weeks ago, and I am thrilled to have you here. Can you please introduce yourselves and let us know anything you'd like us to know before I dig into a bunch of questions?
00:01:42:13 - 00:02:02:03
Unknown
Sure. Thanks for having us. I'm Dave. I'm Helen, and we're happy to be here. It was a beautiful introduction, so I'm happy to dig into the other things, if you'd like, but we're excited to be here and talk about all those things. Okay, well, let's get right into it. So I met you here in Bend, Oregon,
00:02:02:05 - 00:02:26:12
Unknown
a few weeks ago. How did you guys end up coming to Bend? It was you. It was me. It was because of mountain biking. So we came for the mountain biking, and we kind of stayed for the skiing and the general outdoors in us. But that was the main thing. The Bay Area was kind of doing my head in from a mountain bike perspective.
00:02:26:12 - 00:02:47:24
Unknown
And the community up here is just fantastic. And the trails are awesome. So that was why we came. But we built a life here, and it's a great place to live. I agree. So since you came to Bend, my next question for you is: how did you start working together? How did you meet and begin your journey
00:02:48:01 - 00:03:11:01
Unknown
into AI? I know you have separate tasks, like how did you fuse your mission? That's a good question. So both of us came into it... you know, we met in a work environment. We met with a sort of shared interest in technology. And at the time, it was in clean tech.
00:03:11:03 - 00:03:44:22
Unknown
My career had been in the power grid and in the new technologies on the power grid, and Dave's was all over the place, you know, sort of these other technologies that hang onto the power grid, so solar and wind and what have you. And we sort of got into that because it was an obvious place to start building more expertise and helping people think through what's going to happen as the grid is decarbonized.
00:03:44:24 - 00:04:07:18
Unknown
But as we got into that work, what really ended up biting us, as in getting bitten by the bug, was the underlying changes in the software that needed to happen to make a truly smart grid. And by the time we got to that point, we were like, oh, it's all about AI, because this is the much more interesting thing for us.
00:04:07:20 - 00:04:42:01
Unknown
So that was how we got into it, through working. I did some work for an AI company in the Valley that was trying to apply the technology in the power grid, and we got really entranced by how artificial intelligence was going to change how we interact with the world. And the first stepping stone that took us into that more broad application of AI was having a couple of kids who were about to go to college and saying, well, what do we study in college
00:04:42:01 - 00:05:02:08
Unknown
so that the robots don't take the job? We were like, well, the robots aren't going to take all the jobs. That's a crazy idea. But let's figure this out. What's the right way to think about a career in human learning and human creativity and human flourishing? That's what a career is about. What do you want to learn?
00:05:02:09 - 00:05:24:19
Unknown
How do you want to apply that, and what's the way to think about doing that in conjunction with AI, so that you can leverage the strengths of both humans and machines? So we did a big study in 2016 on that and came up with some really interesting conclusions that guided our kids, guide us, and are still really relevant today.
00:05:24:21 - 00:05:58:23
Unknown
So what are those conclusions? The number one conclusion, and it's still relevant today, and you still hear it talked about all the time, is that AI learns from historical data, so it can never handle truly unpredictable, truly novel situations. That's the job of a human. And we're learning a lot about what it is about biological intelligence that allows us to do that: that improvisation, using affordances in the environment to jury-rig and move around spaces that are completely unknown.
00:05:59:03 - 00:06:22:11
Unknown
They're not just combinations of things you've seen before. That's what AI does: combinations of things we've seen before. That's why, with the large language models, people say they're creative, creatively combining in ways that are beyond what a human mind can do. That's creative, truly. It is definitely creative; whether it's of any use and any value, that's a different story.
00:06:22:11 - 00:06:51:20
Unknown
But it is creative. So that core nature of being able to handle unpredictable situations, that's the overarching thing. Then there were sort of five key areas. One was dealing with environmental things and hazards, so biological hazards, environmental hazards; that, you know, requires human judgment. Then spatial application, spatial reasoning. You know, we're using AI for this all the time.
00:06:51:22 - 00:07:18:07
Unknown
But spatial reasoning is something that's very important, and very important to the way that we judge the value of something in space, so architecture things. Then some of what a lot of people like to just dump in the general category of soft skills, though we sort of reject that simple kind of classification: things that are very much about social coaching and social negotiation.
00:07:18:09 - 00:07:47:24
Unknown
So it's not just about being, you know, a therapist. It's about being someone who manages a whole lot of therapists, for example, who manages more complex situations that are more difficult to handle. Then there was the sort of handling of math and finance, but making decisions where those judgments have multiple outcomes. So for some of these, yes, you can have an algorithm pick stocks for you or whatever.
00:07:48:02 - 00:08:19:19
Unknown
But if you really want to understand how you make sort of these deeper financial judgments and handle the ambiguity that comes with mathematical analysis, that's obviously a judgment area. So those were kind of the key areas. Yeah. And the way we got to it was interesting, because we started by looking at the question of the time, which was kind of how much automation would impact employment.
00:08:19:24 - 00:08:47:00
Unknown
Right. And there was a well-covered study from Oxford, and another one from McKinsey. And these folks all came out with their projections of how many jobs would be impacted by automation. And it made a lot of noise in that 2015-2016 timeframe, because they were quoted as saying, you know, 40% of jobs are going to be replaced, which was actually an inaccurate assessment of the research.
00:08:47:00 - 00:09:07:00
Unknown
They did. That's not actually what they said, but that was generally where the coverage was. 47, right. But it was actually that 47% of jobs would be impacted. 42 is the answer, you know this. Yeah. So if you ask him any question, he'll say it has to be 42. But it was actually that 47% of jobs would be impacted.
00:09:07:00 - 00:09:34:03
Unknown
But we sat there and said, well, we weren't quite convinced by how the studies were done and so forth. So we did our own. And we started by looking at the entire database of the US Bureau of Labor Statistics. We used our own projections of all of the different types of tasks and skills and capabilities that AI had at the time, and how we projected that would develop over time, and applied that against this huge database of all jobs, everything that everybody does at work.
00:09:34:05 - 00:09:50:06
Unknown
And we came up with our own projection, which was similar but different. But we were looking at it as, what's the opportunity for actually sort of amplifying humans at work, and how would you think about it as an investor? But we took it and we said, well, hold on a second. Let's not look at where the automation is.
00:09:50:06 - 00:10:12:00
Unknown
We're curious, because our kids were saying, what do we go study in college? Let's turn this upside down. Where are the jobs, and the fields of expertise, that have the least capability to be automated? That's a really interesting sort of inverse of the project. And so we ended up, you know, and I was constantly crashing Excel with what I was doing.
00:10:12:02 - 00:10:33:02
Unknown
But that's how we came up with these answers. And so it was a very ground-up look at exactly how jobs are reported. There are strengths and weaknesses of that database, right? It's all about reporting what you do, and so there are some gaps there. But it had some strong grounding in data and facts, and that's how we came up with those conclusions.
00:10:33:04 - 00:11:00:02
Unknown
Wow. You are super prepared. I'm really impressed; your children are set up for success. So, riffing off that: you reverse-engineered careers that could be interrupted and careers that may flourish. The creative industry right now is in a hugely disruptive period. I'm being affected by that, and I know a lot of people are being affected by that.
00:11:00:04 - 00:11:28:17
Unknown
How do you feel about the current level of disruption that is happening, and sort of the AI rhetoric and hype cycle? It's either going to destroy us or it's the best thing ever. How do you feel about this? What's your take? Yeah, I think that at the moment we're in sort of what I hope is the worst part of the cycle for the sort of anxiety in the creative industries.
00:11:28:17 - 00:11:55:17
Unknown
Right. Because there's an expectation, a projection, of incredible capabilities from AI in creativity, and there are some that are there. But really the fear is about what these tools will do. And so, you know, it doesn't feel like a day goes by without somebody on LinkedIn putting up a little generative AI video and saying,
00:11:55:19 - 00:12:26:09
Unknown
yeah, this is going to replace everyone in Hollywood. And with only a small amount of critical thinking, you'll recognize that, guess what, I'm sure there's a lot of, you know, repetitive content that might be able to be replaced based on previous data. But if you stop and watch any TV show that seems a bit too predictable, it's not a very good production. And, as I'm saying, anything that's novel, anything that's new,
00:12:26:11 - 00:12:44:04
Unknown
right? Anything that deals with unpredictability is a truly human skill set that we have, and will have, over and above AI for quite some time, if not forever, for as long as we keep doubling down on Transformers. Exactly. Something will break us out of this eventually, but we don't see what that is.
00:12:44:04 - 00:13:03:05
Unknown
But the fear is justified. The fear is warranted. I mean, I get why people are concerned. I lived through this a little bit 20 years ago, or even more, now that I have to remember to date myself, more than 25 years ago. I was the first product manager for a tool called Final Cut Pro.
00:13:03:08 - 00:13:28:06
Unknown
And when Final Cut Pro came out, I vividly remember we took the demos down to Hollywood and were going around, and the professional editors at the time, who had to have a certification or a licensing or something to be able to run an Avid booth, you know, they thought that we were going to kill the editing profession, because anybody could do it and it would be too cheap and would take all the money out of it.
00:13:28:08 - 00:13:54:17
Unknown
Well, now you have millions of people using that tool, millions of people using Premiere. The Avid, you know, editing bay that you had to rent by the hour is gone, and now lots of content is made with these tools. But the fear upfront was very real and material. People worried about their jobs, that suddenly somebody else is going to be able to do this for cheaper, and that's going to put me out of a job. That is a constant, you know, refrain.
00:13:54:22 - 00:14:16:13
Unknown
Now, I don't think we are the techno-optimists who say, well, guess what, we'll always find something better to do, right? There are patterns where new technologies have created new fields, and there's disruption that's painful, but other jobs emerge. This is different, however, in that this is a different kind of technology. We've never made something that has a level of intelligence in it.
00:14:16:13 - 00:14:39:23
Unknown
We've never made anything that has at least the potential to be able to make decisions and take actions on its own. And we've never had this capability of mass creation that happens within the software itself, not at our instruction through the command-based interfaces that we've built over the last 40 years. So we don't really know where this is going to go.
00:14:40:00 - 00:15:08:07
Unknown
I still think that, you know, rolling the dice, my bet is that it's going to be more enabling for creativity than not. It's going to create a higher level of expectation of creative output, right? But that will clearly create disruption for certain people doing certain things. We struggle with it ourselves. If you look at our website, artificiality.world, you'll notice that our images are from Midjourney.
00:15:08:09 - 00:15:30:09
Unknown
And is that displacing anyone, you know, really? I mean, if this tool didn't exist, maybe we'd still be using Adobe Stock, which obviously does have a revenue source behind it for photographers and creators, but we might not be using any images at all, because we kind of got tired of stock images over the years.
00:15:30:11 - 00:15:51:09
Unknown
So we struggled with this question of, what does it mean to use these tools? What does it mean in terms of the creativity that went into these tools? How do we think about the most responsible way to actually, you know, be creators ourselves? It's not an easy answer. We're constantly rethinking what that means.
00:15:51:11 - 00:16:24:24
Unknown
He's usually more optimistic than me. That's generally how life works. In terms of the creative fields, as opposed to creativity as a broad human endeavor, yeah, creative fields, I feel a little pessimistic, and not because of the technology at all. I feel quite positive about the technology. I feel really negative about the system in which the technology has been adopted:
00:16:25:01 - 00:16:47:03
Unknown
the bias for automation over augmentation. There's a lot of talk about how humans are so special and how we need the human touch. Will people want to watch things made by machines, or read things made by machines? The early data is a little troubling. The early data is that a lot of what people make is
00:16:47:03 - 00:17:27:07
Unknown
kind of bad stuff, bad writing. So if you read something and it's written by a machine, and you don't know it's written by a machine, people tend to prefer it. There are lots of consequences and outcomes from that, in the creative arts in particular. I think that there's a real short-term concern, and this has happened before: decision-makers who don't understand the depths of what's required to make a truly creative product will say, oh, we've got a machine to do this now.
00:17:27:07 - 00:18:05:20
Unknown
Yay! Let's just do that. And two years from now, they turn around and go, oh, actually, that didn't really work, that wasn't that great. But those people who had jobs and livelihoods have had to go and figure out something new. And that is actually the history of technology disruption that isn't talked about enough. You know, ten years ago, people would say, oh, you're a Luddite, without actually understanding that the Luddites had a really good point, and they were fighting for their livelihoods.
00:18:05:22 - 00:18:26:22
Unknown
And being a Luddite is a term of derision now, like, you know, you don't want to use technology and you're kind of backward, and maybe you're a bit dumb and uneducated and you're not willing to get with the program. But the reality is that when we disrupt society this way, it's really bad for everyone.
00:18:26:24 - 00:19:09:17
Unknown
And these utopian stories are terrible stories, because they're all based on getting rid of people's jobs. And I haven't spoken to a single person who actually, realistically, wants to go and spend their life on the beach getting UBI that they don't believe in, because they know that none of that is actually going to be how it rolls out. The reality is more one of hard-core economics, and the hard-core economics is that the supply of creative content explodes to the point that demand cannot keep up with supply.
00:19:09:17 - 00:19:36:12
Unknown
It's completely saturated. And in economic theory, what happens then is the price collapses. And we are seeing this, right? We're seeing the price of research collapsing, the price of, you know... so if you were an illustrator and you used to charge $20, what can you get now? Can you charge $0.02? $0.20?
00:19:36:14 - 00:20:17:12
Unknown
And so, you know, this is well documented with music streaming. And I think that that's what makes me optimistic about the tools and the technology, but pretty pessimistic about the societal system in which we're adopting it, which actually is what matters. Yeah, I guess I have one more thing, not to be the only optimistic one in the conversation here, but just the sort of questions that I have as we think about the creative industries, which is an industry we spend a lot of time thinking about. If you look at some of the past, look at the whole evolution of digital filmmaking, digital TV, special
00:20:17:12 - 00:20:36:11
Unknown
effects: early on there was clearly a concern about the effects industry, right? As soon as CGI started to become a thing, it was going to put everybody out of jobs. It didn't. I mean, there's a huge CGI business, because movies changed and everybody started adopting these things. Now, people's skill sets and jobs were disrupted, for sure.
00:20:36:11 - 00:20:59:24
Unknown
Like, I'm not trying to make this sound like there wasn't a challenge. And that also bled over and created essentially the entire gaming industry, which didn't exist and is now a massive industry that employs a whole heck of a lot of people. Now we run the risk that somehow, you know, generative AI is going to displace even the gaming industry.
00:21:00:01 - 00:21:19:15
Unknown
So then there's this question of, well, where does human creativity go? Where does human desire go? Does something else happen that we don't know yet? Because even though I was around at the very early gaming stuff, right, in the 90s, the first technology company I worked for was Macromedia, and one of the things people used Director for was making games.
00:21:19:17 - 00:21:42:18
Unknown
We would not have seen this explosion that's happened in the game industry; we couldn't have predicted it. So there's that sort of question for me: what happens when things shift and change, and where does that go? And broader than that, there's this question of whether there are different forms, different sort of art forms, that can emerge.
00:21:42:20 - 00:22:02:02
Unknown
What is the new way of making film that does things? And the last thing I'd say is whether there are economic shifts that happen in the overall creative industries. And I'm thinking a lot about film and TV at the moment. Right now, you spend gobs of money on these huge action films, and sometimes they work, sometimes they don't.
00:22:02:04 - 00:22:27:03
Unknown
But then you also have these very high-beta, you know, small-scale productions that are very human-based. You're not going to remove, you know, the people that are in those small indie films that sometimes are a huge breakout success. Now, if you can make those action films cheaper, because it's pretty easy to take Tony Stark and make him fly across the sky using a generative AI tool and not have to do all the crazy stuff there,
00:22:27:05 - 00:22:48:11
Unknown
I mean, maybe, with more money, they'll invest more in other projects that are much more human-based, where they tend not to do as much of that today. Like, I think there could be some level of overall shift. I don't think our consumption of or interest in creativity is going to decline. I think we have an almost insatiable appetite for new entertainment,
00:22:48:11 - 00:23:10:20
Unknown
new creativity, new art. So the demand is there, so the economics are there. So the overall question for the industry is, if some level of cost is removed, does that cost get reapplied to something else? Because there's still the human desire to consume. Now, that's a lot of maybes and a lot of mights, and it's definitely the positive outcome.
00:23:10:22 - 00:23:32:12
Unknown
Because we try to imagine what the hopeful future is. We don't deny the risk factors and all the negativity and all of the disaster scenarios. We've done a lot of writing about existential risk; no question, we get it. But we also try to imagine the positive future, because our view is that if we can imagine it, we have a chance of creating it.
00:23:32:14 - 00:23:53:11
Unknown
Well, and this is, you know, what we always talk about, the excitement-to-fear ratio, because you have both going on at the same time: excited but fearful of it, positive and negative. It's not a simple "this is great." And anyone who says it's all great is just deluded; that's flawed thinking, not rational.
00:23:53:13 - 00:24:34:21
Unknown
And what makes me optimistic overall is that this is kind of an evolutionary event, right? We'll get through this. There'll be new state spaces on the other side of this. There'll be completely new ways of thinking about what creativity is. And as we encode more and more of the human experience into these tools, there's more opportunity to see things that we've never been able to see before, to experience things we've never been able to experience before.
00:24:35:00 - 00:25:07:11
Unknown
We haven't even started designing lives. We haven't even conceptualized what it might be like to use an AI to truly understand how someone else is feeling, in a useful way. We haven't. I mean, obviously there's a negative side to that as well, but also to enhance our own creative skills. So we're very early in this. I think the short-term disruption is really troublesome, because it doesn't happen in a vacuum, right?
00:25:07:14 - 00:25:33:10
Unknown
Disrupt the whole creative industry, and you move people in and out of jobs. You have to reskill so quickly. There is a real cost for society, and we don't talk about that enough. But longer term, you can see that all of these spaces that we consider to be creative now, there's no reason why they couldn't be bigger, with the machines
00:25:33:10 - 00:26:05:07
Unknown
helping us, as a cognitive crutch, to see different things. I love all of that, but I'm going to push back just a little bit. So right now there's a kind of upheaval with a lot of artists, and I'm going to focus just for a minute on gen AI; I know AI covers a huge area. We've got sites like Cara, where artists are no longer putting their work up in spaces where they feel their images can be scraped.
00:26:05:09 - 00:26:32:21
Unknown
I'm seeing lawsuits starting to go through now. How do you feel about the way that AI is trained and the ethics behind it? And how can we move forward in a more responsible manner? Because I feel like artists have really had to bear the brunt of this disruption, specifically so that other people could benefit, because speed equals profit.
00:26:32:21 - 00:26:51:09
Unknown
I'm guessing that's part of the underlying methodology. So I'd love to hear your thoughts. Oh, I think this is the biggest value transfer that has ever happened in human history. I think it's just awful. Yeah. And I'd throw in that, in addition to artists, it's writers, and we are writers, right? Yes, yes.
00:26:51:09 - 00:27:14:19
Unknown
So everything we've ever written has been consumed, right? You know, some amount of what we currently do is behind a password paywall, but I don't know how much they get beyond those. So we live both sides of this, right? We're both having all of the work that we've done consumed.
00:27:14:19 - 00:27:44:00
Unknown
Right. It has been consumed, not with our authority, without anybody asking. We have copyrights written on everything, whether it was our own individual work or stuff that we've written for, you know, large-scale publications that we worked for in the past. Yeah, it's all copyrighted work, and it's all been consumed. And that's really troubling. I mean, the stance from the AI companies is not just that it's public domain and on the internet, though that's part of it.
00:27:44:00 - 00:28:14:24
Unknown
But the core of the stance is, well, a machine learns like a human, so we're just using that stuff to learn. And we know there have been enough cases now that that doesn't really stand up, for lots of different reasons. And I think that it goes back to the point I made before about the sociotechnical system that we're putting this into, that people like OpenAI
00:28:14:24 - 00:28:57:18
Unknown
They think it's okay to do this. Yeah. There's no sense, no consciousness, that this maybe was riding roughshod, maybe wasn't quite in the spirit of acknowledging that people are hurting because of this. And I think that lack of acknowledgment and awareness, and this arms race forward, is ultimately a huge mistake on the part of these companies, because it has naturally provoked an enormous and quite justifiable backlash.
00:28:57:20 - 00:29:33:01
Unknown
And this huge experiment that OpenAI launched on the world almost two years ago has biased everything towards large language models and transformer-style training. So we're not actually making the progress towards true AGI that we might have otherwise, if AGI is what you want. It's biased everything towards human replacement without a realistic promise of what's on the other side of that. It's taken people's work and livelihood and intellectual property and said, oh well, you put it on the internet, so it's mine now, and I'm going to make money out of it.
00:29:33:03 - 00:30:07:01
Unknown
And it's created these inflated expectations about what a large language model can do. And people are using it for terrible use cases, which is then polluting the internet with crap, you know, just pollution. So I look at this overarching socio-technical system and go, that's the bit that's broken. And so there's no point fixing the technology per se.
00:30:07:01 - 00:30:33:13
Unknown
What we've actually got to do is really think about what the human components and the institutional options are. And that's why, even though some of our work looks a little bit out there, it's really important for people to understand the narratives that are driving the strategies behind these companies. The narratives drive the people, and the people hit the buttons; it's not machines doing it.
00:30:33:13 - 00:30:58:04
Unknown
Yeah. Although, you know, not that far in the future there'll be agentic parts of this equation. We'd better fix it before the agentic part of this equation arrives. Because if we just have ChatGPT with full agency going out into the world doing things, without even a prompt from a person, God knows what crap it's going to come up with.
00:30:58:07 - 00:31:35:01
Unknown
And even though these models are incredibly useful, incredibly valuable, and a massive step change forward in human understanding of what we create with knowledge, this abstraction that we call a large language model, this planetary-scale intelligence: if we set that loose on the world now, with the current bias towards getting rid of people and making the people at the top rich, like these predators, that's not a good outcome.
00:31:35:03 - 00:31:55:07
Unknown
So the reason we're so into imagining a different future is because that's what humans do. Machines can't imagine. We can't get GPT to imagine a future; it'll just give us a reflection of the past, all mashed up and turned into, you know, a sushi burrito. We humans need to do the imagining.
00:31:55:07 - 00:32:18:15
Unknown
That's the kind of space that we need, right? Some people think that saying we should imagine a better future with AI is kind of like, you know, unicorns and lollipops and rainbows. It's not. This is hard. We have to do this. The most responsible thing you can do in AI at the moment is offer an alternative narrative to the one that's been pushed on us.
00:32:18:17 - 00:32:50:08
Unknown
Yeah. Part of the challenge is that we have established some level of norms in human-to-human consumption of other people's work, right? We had a fascinating conversation on this general topic with a musician named Jonathan Coulton on our podcast. And Jonathan's interesting because he comes at the topic saying, look, you know, when I write songs, I mash things up over all the things I've ever listened to.
00:32:50:12 - 00:33:09:00
Unknown
Right. You know, I don't really know where I'm borrowing from other people, and there are some levels of what the AI is doing which are similar to what I do. Well, that's the argument the AI companies are starting to make now. That's right, but we are human beings; it's biological people doing all of the updating of the model.
00:33:09:02 - 00:33:28:02
Unknown
It is, but it's different when it's a model owned by someone. Yes. But there is also a distinction in that there are established norms and laws that support what he can and can't do right now. Sometimes that works: you get artists who come out and say, you've copied my work, and there's some sort of lawsuit. There's been a whole bunch of that going on in
00:33:28:02 - 00:33:47:09
Unknown
the music industry in particular over the last couple of decades. Sometimes it doesn't work. Jonathan himself got stuck in one of these. One of his top songs was a version he created of a Grandmaster Flash rap song, but he created a whole new melody for it, this great little guitar thing, and it's a hit.
00:33:47:13 - 00:34:11:07
Unknown
Right? And he got ripped off by the TV show Glee. Oh, they performed the song with his melody, which did not exist anywhere else in the world. But based on the way cover-song laws are established, he didn't own that melody, because it was a cover of the original song, and so the most he could do was call them out on it.
00:34:11:07 - 00:34:34:13
Unknown
He has a huge fan base who made a big stink out of it, and whatever. But sometimes the laws don't work even in the human-to-human part of the collaboration; they're not there to support the artist and what we really think is true art and their true contribution, which is what Jonathan definitely had. Now, what we're moving into is a world where we have zero norms and zero laws and zero ideas or protections
00:34:34:13 - 00:34:57:07
Unknown
when you deal with a machine that does this. We have nothing. And I think the biggest problem right now is just this mass rush to adoption, before we've figured out what we culturally believe is an okay use of this tool or not. Yeah. So, for the reasons I gave, we worry; we grapple with what we should use as images.
00:34:57:09 - 00:35:15:09
Unknown
If we weren't using that tool, we might not have any. So that's sort of the logic. Well, I'm not sure. Yeah, we're using the tool, and, as you can tell from the tone of my voice, I'm still struggling with it. Right, right. But we had another interesting interview on our podcast with Richard Kerris, who heads Omniverse for Nvidia.
00:35:15:11 - 00:35:33:12
Unknown
Yeah, talking about Omniverse: the fact that you could take an asset of a car that was originally filmed in Big Sur and put it on an island or something, right? They're using an asset that they own with film that they took; they're combining those things. So, you know, where is the line?
00:35:33:14 - 00:35:51:21
Unknown
You know, some of these things feel cleaner, a better use of the tool, even though it's a very different use of this sort of generative AI. But then there are other things that clearly feel really wrong. If I go in and ask for art based on a specific artist and get a replication of it, that's clearly wrong.
00:35:51:23 - 00:36:13:10
Unknown
You know this because we're working on a project together. You'll experience this at our upcoming Imagining Summit, where there's an artist who has worked with an AI company to train an AI system on his art, so that you will be able to co-create art in his style, and then you'll be able to take that art and go off and do things with it.
00:36:13:10 - 00:36:36:24
Unknown
You could go make a T-shirt with it and so forth. Right. That's also okay, because he gave it the go-ahead. Right. It's his idea, his art, his contribution, his work. These are all very different use cases where the technology is fundamentally the same, but some of them feel really right, I think universally, to people.
00:36:36:24 - 00:36:56:01
Unknown
And with Patrick's thing, I don't know why anyone would complain about an artist saying, I like using AI with my art like this. That seems to be a good thing for him. But then there are plenty of others where universally we feel like it's not a good thing. And then there's this messy middle where we don't really know; there's a lot of disagreement.
00:36:56:03 - 00:37:20:13
Unknown
And to me, it's not that the technology is bad; it's how it's used that's the problem. But we have this world of big-tech extraction economics that is basically just rushing ahead and extracting as much as it can, so we're not having enough time to stop and think and figure out what might be a good use or not, you know.
00:37:20:13 - 00:37:36:14
Unknown
Yeah. A historical reflection: I was around at the beginning of the internet days, let's say the dawn of the commercial internet in 1995, when there were 9 million of us online. It was a small community. It took a long time to grow, even though the dotcom bubble was fast.
00:37:36:14 - 00:37:56:24
Unknown
But there was a lot of time to kind of figure stuff out and see where culturally we created some sort of new norm. Now we have this technology that's available to billions of people overnight, basically, because anyone who has a phone can access ChatGPT or Midjourney or the others. We don't have time to slowly figure this out.
00:37:57:02 - 00:38:25:02
Unknown
And so it's been thrust upon us in a grand petri dish. Okay. So, taking all that information, I'd like to hear more about your company, Artificiality, and the upcoming Imagining Summit that you're going to have here in Bend, Oregon. I'm assuming you're going to address some of these topics; you have an incredible lineup of some pretty heavy hitters.
00:38:25:04 - 00:38:58:11
Unknown
What do you hope to accomplish with your company and with your summit? What's the purpose? Well, overall, the purpose of Artificiality is that we really spend all of our time trying to answer the fundamental question of what it means to be human, and how that is changing with this increase of the synthetic in our lives, this sort of synthetic-organic merge that's happening. That combination of synthetic and organic life, that state of being human, is the definition of artificiality, at least the way that we use it.
00:38:58:13 - 00:39:35:00
Unknown
And so that's why we focus on that; our true purpose is understanding it. And a lot of that has to do with AI; that is the fundamental layer that we are viewing this world through now. Longer term, there are much bigger ideas and much broader topics that fall into this, that we're spending a lot of time thinking about: what it means for the future of self, the future of our minds, the future of our consciousness, as we essentially merge with a synthetic version of ourselves and
00:39:35:00 - 00:39:58:15
Unknown
a synthetic version of the collective, this sort of planetary-scale intelligence. So our whole vision of Artificiality is quite broad and quite long-term in its thinking. The Imagining Summit itself: the purpose is to imagine a hopeful future with AI, and we've asked people to come together and to set aside the fears and the risks.
00:39:58:21 - 00:40:26:12
Unknown
As we say, we completely embrace those; we understand those. We reject the utopian view of Silicon Valley. But what we want to do is bring together a group of creatives and innovators to help imagine what a hopeful future is, so that we have a chance at creating that hopeful future, right? And we're doing that from a very first-principles-thinking basis, which is: what do humans actually need?
00:40:26:14 - 00:40:53:20
Unknown
Yeah. And what do they want? So, you know, we've got cognitive scientists, we've got people representing how children use technology, we've got synthetic biologists who are right on the leading edge of understanding how we even conceptualize intelligence and information and life, and we've got loads of creatives: musicians, artists, and designers.
00:40:53:20 - 00:41:26:06
Unknown
And we're heavy on the design side, because design is how things work. So the output is tangible: we're going to produce a design philosophy, and we have some heroes in the field taking a leadership role in that process, AI design experts and long-time experts in design and transdisciplinary, cross-disciplinary design thinking.
00:41:26:08 - 00:41:49:05
Unknown
If people are interested in joining this summit, how would they find out about you? At artificiality.world, click on Summit and you'll see all the info on the people. We describe them as catalyzers, because they're catalyzing conversations. This is designed to be a small, interactive gathering, not a speaker-and-audience format.
00:41:49:05 - 00:42:08:24
Unknown
So there's heavy interaction among the people we're bringing, lots of time to get to know people and get to know ideas. It's about access to ideas you just wouldn't otherwise have. At the bottom of that page is a form to fill in to request an invite. We're doing it that way just to make sure we have the right balance of people in the room.
00:42:09:01 - 00:42:25:19
Unknown
It's not a very big room. No, it isn't. But look at the people coming together; just to rattle off some of the names: Don Norman, who, at least in my mind, created the field of human-centered design. He's there to talk about designing for a better world.
00:42:25:19 - 00:42:47:08
Unknown
Barbara Tversky, a legendary psychologist from Stanford and Columbia, talking about spatial reasoning, which is a really fundamental challenge for AI and a fundamental advantage and need for humans. Steven Sloman, a cognitive scientist from Brown, to talk about knowledge and truth. Jamer Hunt from Parsons School of Design will help us scale the problem.
00:42:47:08 - 00:43:07:20
Unknown
He wrote a great book about scaling problems called Not to Scale. We have Katie Davis, who studies kids and technology at the University of Washington. Jonathan Coulton, the musician I mentioned before, will be talking about his vision of writing and playing. Hopefully we'll definitely get him playing; we'll see whether he writes a song.
00:43:07:20 - 00:43:32:00
Unknown
We'll try to nudge him a little bit in that direction. John Pasmore, who is building an LLM called Latimer that is specifically trained to better represent Black and brown culture. We have Adam Cutler, a distinguished designer from IBM, and Josh Lovejoy, a designer from Google; they represent, in our minds, two of the real, true leaders of AI design at big tech.
00:43:32:02 - 00:43:55:05
Unknown
And Michael Levin, of course, who's a synthetic biology professor from Tufts. And then we have some others doing some shorter things. We know at least one startup entrepreneur who will be launching and announcing his new company at the event, and we probably have another one that I hope will also be unveiling what they're up to.
00:43:55:07 - 00:44:17:12
Unknown
So it's going to be pretty interesting. And this art project will be a very core experience of it: you'll be able to work with and create art along with Patrick. He and his daughter will be talking at the event about their initiatives toward nonprofits and raising charitable gifts through their art and this collaborative work with AI.
00:44:17:14 - 00:44:42:20
Unknown
Right. And I believe Patrick is part of Rise; do I have that right? Yes. Yeah. They do amazing work supporting women's and children's initiatives. He's been creating work that supports women and children for quite a while, and I'm thrilled that he's part of it. Also, really quickly: in about five minutes I'm going to open up questions.
00:44:42:20 - 00:45:06:17
Unknown
I see a lot of questions popping up in the chat for you. But Helen, this is for you. Part of your history is studying and kind of predicting breakthroughs that happen, specifically in science. I would love to hear your thoughts on what you see as potential breakthroughs coming in the next decade. Oh. Right. It's a big question.
00:45:06:23 - 00:45:12:18
Unknown
I mean, there is no pressure.
00:45:16:23 - 00:45:41:21
Unknown
Okay. So, okay, how much time do you have? Give me the CliffsNotes version. Yeah, I'll go to the areas that I think are the most promising, which are maybe less breakthrough ideas and more significant paradigm shifts, you know, in the actual Kuhnian definition of a paradigm shift.
00:45:41:23 - 00:46:19:20
Unknown
We're going through a paradigm shift at the moment in the way that we think about what life is, and that has real consequences for the development and use of AI. When it comes to figuring out what is actually alive and what life is, the science is going more and more towards life being about information at its core, with computation and replication as part of this.
00:46:19:20 - 00:46:57:20
Unknown
Now, a lot of this is coming from lots of different angles. It's coming from people thinking about the origin of life, cosmology, where agency comes from, how it can possibly exist, whether we have free will, this sort of revolution in physics. But what it means for breakthroughs in AI is that we will get through this moment in time where we're kind of obsessed with large language models and transformers, and we will see breakthroughs in new types of models and new types of algorithms.
00:46:57:22 - 00:47:20:19
Unknown
You can see the pressures on the transformer architecture at the moment, in terms of the nature of the compute and the question of scaling laws: you know, if you just feed it more data, will AGI just pop out? No, I don't think it will. So there will be a breakthrough, I think, in another way of thinking about the architecture of AI.
00:47:20:20 - 00:48:11:12
Unknown
So that's one. The second one is on the human side. We're not there yet, but I think we'll start seeing a bit of a breakthrough in a theory of consciousness. There are many theories right now, and no clear leading one. We want to understand whether consciousness is an outcome of being alive, or an outcome of information processing, or whether it's always been there from the beginning, as some people like to think. And we're really wrestling with some deep philosophical questions in science around this, around how we think about subjective experience versus an objective reality.
00:48:11:18 - 00:48:44:03
Unknown
So it goes to the core of how we understand reality, and something will break in there; there'll be a breakthrough in there somewhere. Now, for most people, that's kind of like, what? Who cares? Just give me my mushrooms and let me move on. But the reason we care about it is that if we understand what it is that gives us a conscious, subjective experience, that mine is different from his.
00:48:44:05 - 00:49:19:05
Unknown
We know how the retina works with wavelengths, but I don't know if his version of red is the same as my version of red. If we get to the point where we have these theories grounded and landed, then I think what will happen between AI and the biological side is that we'll do the same with consciousness as we're doing now with intelligence, and as we did with bias: we'll build systems that allow us to explore how much we understand it, and test it.
00:49:19:05 - 00:49:40:16
Unknown
And it'll be just like it was with bias. We put it out there in math, and we can reason more precisely about bias in society. I think that's one of the reasons why we have so many more conversations about prejudice and bias and all of that: we have the data on it now.
00:49:40:16 - 00:50:08:02
Unknown
We're able to reason about it now, the way intelligence has been demystified because we're building it. As Feynman said, if I can't build it, I don't understand it. So as we get these theories of life, these theories of consciousness, these new theories of matter, we'll test those by trying to build them. We'll try to build life.
00:50:08:02 - 00:50:35:21
Unknown
We'll try to build consciousness, and it won't just be in labs; it'll be on devices, it'll be in apps, it'll be in, you know, neurotechnology. The next AirPod will be a thought-to-text AirPod. I won't say, hey Siri, put on my podcast; I'll just think it. Sorry, I'm not sure who was speaking.
00:50:35:23 - 00:51:07:19
Unknown
Right on cue! Oh my God, that's brilliant. So it actually gets really tangible really quickly. What sounds really far out there, kind of sci-fi: these technologies are actually doing this now; these theories and this empirical work are happening now. Apple patented a next-generation AirPod with an EEG for thought-to-text, so that I don't have to say, hey Siri, turn on my podcast.
00:51:07:19 - 00:51:14:08
Unknown
I just have to think it. Wow. Sorry, I'm not sure about this.
00:51:14:10 - 00:51:47:06
Unknown
Oh my God, that's really funny; Siri again. I was saying her name deliberately, just for fun. Oh my God, that's really funny. Okay, so I have one question here from Davron that he wanted me to ask before we open it up. He says: in an earlier interview, you talk about the incredibly valuable gift we have as humans to synthesize information and make micro-decisions along the way, which result in a human story or connection.
00:51:47:08 - 00:52:04:10
Unknown
He'd like you to talk about that a little bit more. What does that mean? What is human connection? Well, you do the human connection part and I'll do the other, because you're the connection guy. You go first, though. I think you said it. Well, I think I did say that, so it's okay.
00:52:04:13 - 00:52:24:21
Unknown
I mean, what do we do when we synthesize? I'll start with this: what's the difference between synthesizing and summarizing? Summarizing is something that ChatGPT can do; it can take a whole lot of facts and just sort of put them together and summarize. Synthesizing has a value judgment, and judgments come with consciousness and feelings and emotions.
00:52:24:23 - 00:52:46:09
Unknown
So as you synthesize, you make all these little judgments. Think how long it takes for an idea to go from my head to his head, and what he does with that idea before he turns it back to me. That is my conviction: solving problems together is what connects humans. If we didn't have to solve a problem or create an opportunity together,
00:52:46:09 - 00:53:12:01
Unknown
we wouldn't have a connection. It's doing those joint things together that makes the connection. But his brain is different from my brain, and I'd like to know more about that; I'd like to get inside it. It's all these little back-and-forths, these little tradings as we solve problems, and the empathy that goes back and forth and amplifies between us.
00:53:12:03 - 00:53:42:11
Unknown
A machine can't do that. They can't do that. We do that. And the fact that we feel something matters; that's why consciousness matters. It matters that we feel something, because the feelings guide how we make these value judgments as we synthesize things. And the fact that he feels something, and I can detect that, sometimes I can see it, sometimes I can smell it: the fact that he feels something matters to me.
00:53:42:15 - 00:54:05:09
Unknown
That's why reading something written by ChatGPT, that someone else just pressed a button on, is boring: because we know that no one struggled in it, that no one felt anything to make it; they just hit a generate-me-text button. So, that process of developing human connection: can a machine be part of it?
00:54:05:09 - 00:54:33:19
Unknown
AI can be part of that, but it can't replace it. And it certainly can't replace that process of synthesizing, because synthesizing is all about a value judgment, and a value judgment is based on how you feel, and that's based on your consciousness. Machines don't have that. It's why I'm not so keen on machines actually having consciousness, and I'm not that jazzed about us feeling like a machine has consciousness even when it doesn't.
00:54:33:21 - 00:54:58:01
Unknown
And we know that that's what's happening. So there are a lot of gotchas in there, but that's kind of how I feel about that one. Okay. Your thoughts on this? I mean, I think the part of the question about connection is important, because human connection is how we live. It's how we create a collective.
00:54:58:01 - 00:55:20:06
Unknown
It's how we have a community. It's what society is. It defines culture. It defines who we are, because part of who we are is who we're connected to. Well, it's homeostatic; we die without it. Yes. But it is also a mystery. We understand some parts of it; other parts we don't. We don't understand how we can actually understand each other.
00:55:20:06 - 00:55:46:23
Unknown
We can't actually define or describe how to compute, you know, empathy. Some of that has to do with the fact that sometimes we don't even understand ourselves. You can have a reaction to something and go, why am I feeling that way? Right? So part of this journey is understanding how much we already understand about ourselves; some of it is discovering what we don't really know about humans and humanity.
00:55:46:23 - 00:56:11:21
Unknown
And as we try to extend humanity through a variety of synthetics, you know, synthetic life, we learn more about the gaps in our knowledge of ourselves. It's like the recent work on how our heart rates come into synchrony when we're making collective decisions. To me, that just seems like some kind of crazy magic.
00:56:11:23 - 00:56:34:04
Unknown
But on the other hand, it seems completely intuitive, because we talk about being in sync; we say, we're on the same wavelength. Our language, our metaphors, already take that into account; science is catching up with whether that's actually happening. Wow, interesting. Okay, I think that's a good place to stop.
00:56:34:04 - 00:57:03:13
Unknown
And I'm going to allow people, if they would like, to turn on their cameras; we're going to move into some Q&A. You two have given me so much to think about. Thank you very much. While I'm waiting for cameras to come on, I've got some questions here in the chat for you. So Devon asks: in your opinion, can you name a few of the things that you feel are just superficial hype?
00:57:03:19 - 00:57:14:03
Unknown
And, in contrast, can you name one or two cases or applications you're excited about in terms of AI tech?
00:57:14:05 - 00:57:42:16
Unknown
Well, the hype at the moment is generative search. Yeah. Some people are finding it really useful, but then they find that it's not, and they change their minds. So generative search is not a fantastic use case. I think it can be made better. Will it take over how we use the internet and how we search?
00:57:42:18 - 00:58:09:18
Unknown
Maybe. If you talk to the people at Google about how they're thinking about that, the way it works is that the responsibility for discernment and accuracy sits very much with the user. So that's an entirely different user interface, to portray that level of uncertainty in a result or what have you. So I think that's one. Where I think the great use cases are is in science.
00:58:09:20 - 00:58:36:24
Unknown
Let's just take a very simple example: how many scientific papers can people actually read? You just can't. You need that to be collated and summarized and presented to you at the level of comprehension that you understand, and my level of comprehension on something is different from Dave's, which is different from a PhD's in the field.
00:58:37:01 - 00:59:01:19
Unknown
And we can find what are called sleeping beauties: papers where people discovered scientific knowledge 20 or 30 years ago, but the papers were never cited enough and never became part of the network. A large language model can find those papers, and those are often by women, people of color, people who were never at an elite university, those kinds of things.
00:59:01:19 - 00:59:24:10
Unknown
So that's a simple use case where you can go and find rare or overlooked knowledge. I think that's a really great use case, and there are tools people are building for that. Another is capturing human knowledge from people who are retiring: think about how much knowledge someone has at the end of a career, and then it just sort of goes away.
00:59:24:16 - 00:59:45:16
Unknown
Now, sometimes that's good; we need natural forgetting, because otherwise we can never move forward to the new. But if you're coming into those jobs, you really need to know how that person did that. So this is really useful to capture, and it can be faster to learn and then adjust than it is to reinvent from scratch.
00:59:45:21 - 01:00:21:22
Unknown
So those are simple use cases that I think these things are really well designed for. They're terrific at combinatorial creativity; that's just a slam dunk. Now, you still need the human judgment that says, well, that kind of cookie is a bad idea, that kind is a good idea. But if you imagine all of these different data points, the machine can take them and fill in the convex spaces between them in ways that humans can't. Those are incredibly powerful exploratory use cases.
01:00:21:24 - 01:00:50:16
Unknown
You never need to write a template again. You never need to write a checklist again; you just have them all there. We're starting to learn where the good use cases are, like personalized tutoring, but the big general foundation models aren't good for that; you need a design layer to help. So if you're using ChatGPT to study for a math test, you won't do as well on the test when you're not able to use it.
01:00:50:16 - 01:01:14:21
Unknown
It's a competitive cognitive artifact. But if you use a purpose-built tutor that uses more Socratic methods, one that never gives you the answer but prompts you to think it through yourself, you actually learn faster and do just as well on the test. So there's this whole world of what's a complementary cognitive artifact versus a competitive one.
01:01:14:21 - 01:01:40:04
Unknown
With a competitive one, if you take it away, you can't do that work anymore. So we're learning how complex some of these use cases are, but I think there are some simple ones that really work. I'll also give you a couple of the bigger-level concepts that I think are important to be building on. The first is what we talk about as transitioning from the attention economy to the intimacy economy.
01:01:40:06 - 01:01:59:15
Unknown
We spent the last ten or fifteen years with our attention being captured. That's the whole premise of social media: monetize by capturing attention and putting ads in front of us. As we have more conversations with these tools, they're developing a more intimate understanding of who we are. And that intimacy is a risk, for sure.
01:01:59:16 - 01:02:19:24
Unknown
There's a challenge in that it's difficult to see how that part of the economy is going to monetize the intimacy. But the intimacy is what's required for these tools to really work for us, when they have a truly in-depth understanding of us. The best example so far, at least in what we've seen in the demos, is the move towards the Apple Intelligence experience.
01:02:20:01 - 01:02:37:15
Unknown
Apple Intelligence already has a level of intimate understanding of you, given everything that you put on your phone. It's able to understand who you're talking about, where you want to put things, all these different concepts, and it can move things around for you. That level of usefulness is so much higher because it understands you so well.
01:02:37:17 - 01:02:56:01
Unknown
That opens the door to a lot more really interesting, useful cases at an individual level. This is going to take time; it's a longer-term thing. The other thing I'd say, something I'm both excited about and also fearful of, is the use of agents.
01:02:56:01 - 01:03:16:23
Unknown
That's AI that has agency, that's able to take action. We think some of this is going to be required: you'll need an agent to basically be your interface to the digital world, because the digital world is getting flooded with generative content. As AI floods the internet with bogus stuff, finding things is becoming nearly impossible.
01:03:17:03 - 01:03:34:00
Unknown
It's become harder, and it will become nearly impossible, to find anything you want. So there's a question about how much you're going to need the agent to go find the thing you want, to go find information for you. But once it does understand you well, it can help you not just find what you want, but screen out everything else.
01:03:34:00 - 01:03:49:17
Unknown
We were on vacation a couple of weeks ago, and I sat there thinking, I wish I had an agent that could just interrupt me when something really important happened while I'm trying to be out of the office. There's going to be some stuff that's really important that I want to know about: a message from somebody, some news thing broke, whatever.
01:03:49:22 - 01:04:17:06
Unknown
But if I had a great agent, the ultimate executive assistant, that would allow me to sit and look at the water for a week but interrupt me when something is really, really important, that would be great. That's one use case, but you can get a sense of how this technology may actually have the potential to abstract away a lot of the toing and froing that we do with the digital universe, to allow us to be more present in our day, where we are now.
01:04:17:06 - 01:04:33:19
Unknown
I know that's a bit of a utopian point of view, and she's going to give me a hard time, but that's a long-term vision. Again, we try to imagine a positive future. I'll take that: tune out, except for the one thing I need to know. Yeah, exactly. Back to the beginning.
01:04:33:19 - 01:04:53:17
Unknown
I would love it. Yes, yeah, I would take that. That would be my cutting edge too. Yeah. Zach and Lisa, I saw you unmuted. Do you have questions? The host asked me to unmute. Thanks for being here and speaking with us today. Thank you for inviting me.
01:04:53:17 - 01:05:03:17
Unknown
You're welcome. It was very informative and interesting to listen to. I'm someone who's invested his whole, excuse
01:05:03:17 - 01:05:21:10
Unknown
me, his whole life in creativity. So AI potentially stands to remove my livelihood. And I've been looking for the thing that AI can't imitate, and I came up with humor, and spirituality, as far as interpreting things in a spiritual manner.
01:05:21:12 - 01:05:46:12
Unknown
And then I got on to something called narrative therapy. If you don't know what that is about, it has to do with contradicting and rewriting your personal narrative, the narrative of your personal history. But now I'm thinking that ChatGPT would be a very helpful assistant in that process as well. I agree with you on humor being one of the last things.
01:05:46:12 - 01:06:08:05
Unknown
In fact, the last time we were talking to the director of the Santa Fe Institute, David Krakauer, I said, you know, there's a reason that humor is going to be one of the last things that AI can do. Well, I like the idea of coming up with things on the fly, and we've all done it.
01:06:08:05 - 01:06:44:17
Unknown
We've all made jokes on the fly, without thinking beforehand and constructing a routine, just spontaneously transforming the situation with humor. I guess that's something AI could potentially be trained to approximate, perhaps someday in the future, sooner than I think, maybe. But it seems to be one of the last essential human verities. I think one of the key things about humor is that it is highly context dependent.
01:06:44:19 - 01:07:02:03
Unknown
We like to say that context is everything. Everything about how we're presenting ourselves today is based on the context of this conversation. It's going to be different when we're on stage somewhere giving a keynote. Every interaction is shaped by its context.
01:07:02:03 - 01:07:27:12
Unknown
And today, at least, AI has an extremely limited understanding of our context. With large language models you have to input it: you have to tell the model all kinds of things in the preface of the prompt, and then you have to start again with every new conversation. We're always thinking about the context we're giving it, and people get excited that they can tell it to pretend to be this, that and the other thing, but that's really crude and really basic.
01:07:27:12 - 01:08:05:15
Unknown
And it takes a statistical view of context as well. With the current architectures, it will always take a statistical context, so anything that's slightly unusual will be difficult, and a lot of context is like that. If you've ever been a performer, you know that every time you stand on stage there's a context, a vibe to the room that you're reading as you perform, and there really isn't any way to describe that to anyone.
01:08:05:20 - 01:08:41:10
Unknown
There isn't any way to relate it other than intuition, maybe telepathy, but that's going too far for some people. It depends on the audience: how funny you are, how good an effect you have, always depends on the audience. Sure. Yeah. And the audience is noisy. Not loud, noisy: there's a randomness to how it shows up, and that is almost the ultimate unpredictable situation
01:08:41:12 - 01:09:05:17
Unknown
for an AI and a human to deal with, which goes back to where we started with an observation. Yeah. Well, I guess AI doesn't have that capacity, or maybe only in a very limited sense. And sometimes humans bomb too, right? Skilled humans definitely bomb. It's the same. Yeah.
01:09:05:17 - 01:09:26:13
Unknown
There's a little bit of an 80/20 here, in that what we are seeing with the language models and the multimodal models is that part of what they're doing is developing a sense of context, much more so than the predictive AI of ten years ago. But that doesn't mean they have a minimum viable level of context for comedy.
01:09:26:15 - 01:09:56:09
Unknown
They've got a minimum viable level of context for writing a poem for someone's birthday, or a haiku in the style of Barack Obama or Taylor Swift, something like that. And I see, I mean, I imagine they could, but could you say, come up with a comedy routine in the style of Bill Cosby or Joe Rogan, whatever comic you want, and have it produce an approximation that's any good?
01:09:56:09 - 01:10:12:24
Unknown
Would anyone want to look at it? Probably not. Well, it depends on the model, depends on the time, depends on how the model is performing on a particular day. I play around with some of these models to try them out in terms of voices that I know.
01:10:12:24 - 01:10:37:03
Unknown
Just yesterday I was taking something and comparing between ChatGPT and Claude, asking each to rewrite it in the style of Bruce Springsteen. As a kid from Jersey, I know Bruce pretty well. Strangely enough, Claude was terrible and ChatGPT was much better, and that would have been the reverse in my experiments a couple of months ago.
01:10:37:05 - 01:10:57:09
Unknown
So the behavior of the tools is still quite variable, highly unpredictable. Yeah. Lisa, do you have any questions for our guests? We're at 1:11, and I'm probably going to start to close this down in about ten minutes.
01:10:57:11 - 01:11:26:13
Unknown
Hi. Thank you, that was really interesting. I love hearing and learning about these types of things. I think Heather can probably guess roughly what I'm going to say, because whenever I'm allowed out in public I say a version of the same thing, which is about the East and the West, and, I always say this but I think it needs to be said, about how the North American and European English-speaking hegemony tends to dominate so much.
01:11:26:13 - 01:11:53:00
Unknown
It has throughout history, and a lot of the discussions around AI are different from the conversations that I hear in Asia. I've spent half my adult life in Asia, and there's a bit of divergence in terms of where they think it's going and where they would like it to go, because of the historical dominance of the materials, the ideas, and the thoughts that are recorded.
01:11:53:00 - 01:12:20:24
Unknown
And the way they're recorded. That goes down to everything from what's considered fact down to values. Of course, religion largely drives culture, so there's a slightly different viewpoint; some of it's similar, but some of it's different. Have you researched that difference between the Eastern and Western hemispheres of thinking, and if not, would you like to?
01:12:21:01 - 01:12:59:03
Unknown
We've done some research on attitudes to surveillance, which are stark and kind of arresting. We haven't done any particular research on that difference, other than recognizing that there's an obvious schism there. If you're talking about China and the US, those are two different worlds, but if you're talking about Asia more broadly, there's this difference in what's digitized and captured in data.
01:12:59:03 - 01:13:34:02
Unknown
So that's starkly different. But the entire philosophy of the collective versus the individual is a really big deal here. It's weird in the sense that, as we digitize more and more of our Western ideas, our Western developed-economy philosophy, we realize how much we put on the individual when it really is about the collective and the collective knowledge.
01:13:34:04 - 01:14:03:16
Unknown
And I think one of the things that we'll all start to grapple with is that these tools are going to present to us the collective unconscious of humanity, with an English skew. But in doing so, we'll see how we as individuals are just part of this collective in a way we've never seen before.
01:14:03:18 - 01:14:41:02
Unknown
So I think it's going to be much more disruptive to the psyche of the West than it will be to the psyche of the East. It pleases me to hear you say that, and it gives me reassurance. As Helen said, we've done some work on that topic. I'm always kind of wary of how much I can expect to understand another culture from the outside and apply that understanding to something like this. I've spent enough time over the years looking at some of these cultures, and at all the different technologies that were sold into them, to say: he doesn't know
01:14:41:02 - 01:15:07:15
Unknown
what he doesn't know. Yeah. And that's why I'm wary; I think it's difficult. I'm glad to hear that you spent time living there, which is a totally different viewpoint than those of us on the outside have. I'm always cautious when people make broad generalizations. Now, one of the other things we have noticed is that this isn't just East versus West, an English-speaking internet versus a Chinese-speaking internet, those kinds of things.
01:15:07:15 - 01:15:29:12
Unknown
When you get down to smaller or more distinct cultures, there are very big differences too. As I mentioned before, John Pasmore is building Latimer as an LLM to represent Black and brown culture, because he felt that those cultures were not well represented in the global-scale models.
01:15:29:14 - 01:15:59:16
Unknown
Some of the data that has been used to train Latimer is proprietary data that comes from academic databases and doesn't exist on the public internet. It can't be present in ChatGPT; it literally is not there. We've also spent some time looking at indigenous cultures, and had a very interesting podcast episode with an expert in Māori indigenous digital rights.
01:15:59:18 - 01:16:26:12
Unknown
That's particularly an interest of ours; as you can probably tell, Helen's from New Zealand, so we have a lot of focus on and interest in that. When you get to that level, hearing him talk about how the Māori people think about data, and how culturally it's only the younger people now who feel like it's okay to actually have their picture taken and digitized,
01:16:26:14 - 01:16:53:11
Unknown
when you hear that, you go, well, okay, so there are generations of people in that culture who have not been photographed. That makes it kind of hard to have a digital representation created of them. Some of the things that he expressed in terms of common views of data and intelligence across indigenous populations, how they think about culture and community and connection to the Earth, are fundamentally different
01:16:53:13 - 01:17:18:18
Unknown
from what you'd consider broad-based Western thought, because, as you say, Western thought is dominated by generally English-speaking white people. A lot of the data that exists on the internet is dominated by that culture, because those are the people who were on it first and digitized first.
01:17:18:20 - 01:17:40:14
Unknown
So we have this huge challenge of cultures that are smaller and less represented being lost in these large-scale models that are dominated by particular cultures. One last little thing before we move on: we had the same experience. I was kind of surprised that the device over there actually recognized Helen's voice.
01:17:40:20 - 01:18:01:00
Unknown
I'm not going to say its name because I don't want it to wake up again. For years she's been saying that it never understands her, because she's got a Kiwi accent. There are only 4 or 5 million people in the world who have an accent like hers, so guess what: the voice recognition systems just aren't that good with it, because they haven't heard enough data for it yet.
01:18:01:02 - 01:18:25:23
Unknown
That's just one small data point about how challenging it is to have these global, planetary-scale data systems represent what is an extremely robust and individualized set of cultures across our humanity. Absolutely. And I hard relate there, Helen, on the accent: I'm Scottish. There are 5 million of us, and we don't get recognized either with our accents.
01:18:25:23 - 01:18:53:21
Unknown
Right. You probably have to put on an extra brogue, because all it's learned is Sean Connery. It's learned from movies. Come on, Billy Connolly. Yeah, Billy Connolly. It's interesting, because I get frustrated when people do research in other parts of the world, and what tends to happen is they tend to talk to people who are English speaking. I do speak several languages, and I think that gives you something. I always have these arguments with people who say you don't need to speak the language to understand the culture.
01:18:53:21 - 01:19:29:02
Unknown
And I'm always like, for real? Are you kidding me? These are very, very clever people, but when they go and do research, they're relying on intermediaries and interpretations of that language. We can misunderstand each other, as you were talking about earlier, even when we're using the same language and have the same cultural norms and values. So when you're trying to extrapolate meaning across several different interpreters, the gap and the distance between what you're observing and what the reality is becomes even greater.
01:19:29:04 - 01:19:47:23
Unknown
I think even before digitization there was a lack of information from these cultures, because if you're studying medicine, for example, or law, you're not going to find a lot of textbooks that were written by local medical professors there. And if you don't read English, you're relying on translations that are twice as long by the time they're translated.
01:19:48:03 - 01:20:08:05
Unknown
And there's a cost. So even pre-digitization, these cultures had a smaller voice, even just in textbooks, across the whole world. It'll be fascinating to see how people like you help shine a light on some of those discrepancies. But don't underestimate Asia; Asian people are on it as well, and they're going to make sure that they're heard.
01:20:08:06 - 01:20:38:02
Unknown
There are a lot of smart people on it. Yeah. And we actually want more complexity here. English has simplified the world, in a way; it's more simple, it's easy to get around, and we can all travel while only really speaking English. And with Google Translate or ChatGPT, the requirement to learn a language is less.
01:20:38:04 - 01:21:15:09
Unknown
It doesn't work with Thai, though. But the value of learning the language is actually higher, because what we build in our mind when we learn another language is a representation of that culture. Those representations are much richer than the constrained representation that a language model takes and generates new text from. It's interesting: translation is one of those jobs that, you could say, is going to go away.
01:21:15:09 - 01:21:35:10
Unknown
Well, a lot of it will. But when you really need to translate, when you really need a deep translation of the actual meaning of the language, the meaning of the culture, you won't be able to rely on any of these tools.
01:21:35:12 - 01:22:02:23
Unknown
Yeah, thank you. All right, guys, we are at 1:22, and I feel like this is a great place to stop. I really want to encourage people to go check out Artificiality and the Imagining Summit happening here in Bend, Oregon this fall; it's October 12th through the 14th. Thank you, everybody, so much for joining us.
01:22:03:00 - 01:22:20:10
Unknown
I am so thrilled to have had these fantastic guests. And thank you to Lisa and Zach for your wonderful questions. Thank you. Thanks a lot. Thank you, everybody, and have a wonderful week. You too.