📣 Expert Q&A Series: @xlr8harder
@xlr8harder discusses whether LLMs can think and effective accelerationism
Greetings, fellow humans. 👋
This is Not A Bot - the newsletter about AI that was definitely not written by AI. I'm Haroon, founder of AI For Anyone, and today, I'm excited to share the second installment of Not A Bot's Expert Q&A series.
Today's guest is @xlr8harder, who runs one of my favorite AI-focused profiles on Twitter. You may recognize him from his rainbow-colored sunglasses-wearing cat avatar.
During our chat, we cover a range of topics, including:
Whether LLMs can think
What effective accelerationism is
What differentiates this tech revolution from previous ones
Who xl really is... (and is he a cat?)
This was one of my favorite conversations yet, and I hope you enjoy it as much as I did!
Let's jump into it...
@xlr8harder discusses generative AI
xlr8harder Interview
[00:00:00]
Haroon: You can just tell everyone a little bit more about yourself and whatever you're able to disclose.
xl: Sure. Yeah. I work in AI. I've done some time in big tech. I've got a Twitter account where I tweet about AI and effective accelerationism. I think there's a lot of change coming in the world right now, and part of what I try to do with my account is communicate about it a little bit because I think people aren't really prepared for how much change is coming.
Haroon: In terms of effective accelerationism, can you tell us a little bit more about what that means?
@xlr8harder explains what effective accelerationism is and why he has such a positive outlook on the future of technology and society 🤩
— Haroon (Not A Bot) 🤖 (@haroonchoudery)
4:59 PM • Feb 10, 2023
xl: Sure. Effective accelerationism is a philosophy of techno-optimism. Basically it's based on the belief that we can create abundance for every human that's alive right now if we have the will to follow through and do it. It has some roots in physics, in thermodynamics, about the nature of existence itself, and it favors certain approaches [00:01:00] to building a civilization. But mostly we try to stay focused on the more practical things: the people who need to get out there need to build things. It's a really exciting time to be alive. There's so much happening and so much change that we're about to go through, and we generally believe that it's gonna be an enormous net positive.
Haroon: Interesting. I love the optimistic approach there. I think a lot of folks are fearful of AI technologies and other cutting-edge technologies that are revolutionizing industries and society as we know it. What would you say to those folks who don't take the optimistic approach and are more so fearful that AI's gonna take their jobs and they're just gonna be left with difficult circumstances as a result?
xl: I'm not gonna say that it's always going to be easy going through this transition, but humanity throughout its entire history has dealt with new technologies changing the way that things [00:02:00] work. The classic example is the buggy whip manufacturers when automobiles came around - you didn't need buggy whips anymore.
But we have social safety nets now, and I don't think there's a situation where people aren't going to be able to find a job over a long period of time, because I personally don't expect AI to completely displace most jobs. I think people will work in conjunction with AI. Their productivity will be higher and there might even be more demand.
There's an example - let me see if I can think of it. Oh, the cotton gin. When that was invented, people thought they wouldn't need workers anymore, but what actually happened was it brought the price of cotton down so far that it was much more accessible, and demand for cotton actually went up.
And so there's a lot of uncertainty in the future and no one can say exactly how it's gonna play out, but I believe it's something that [00:03:00] we will handle, and we will all be so much more well off than we were to begin with.
Haroon: So in your opinion, it's gonna be a net positive, where we may go through some growing pains, but ultimately we're gonna be in a world where, as you described it, there's gonna be abundance for everyone.
xl: Yeah, an enormous net positive. That's where I believe we're headed. Yeah.
Haroon: And in terms of preparing for these next couple of decades, maybe let's just say the next decade, do you have any advice for folks who want to prepare themselves to go through these growing pains and they want to make sure that they're taking care of themselves and not being adversely impacted?
xl: Yeah, I think that just being psychologically prepared for a time when there's gonna be a lot of change is probably the biggest thing that anybody can do. There's gonna be times when it probably seems scary how fast things are changing. That's where I think we're headed anyway.
And there's gonna be people that are gonna [00:04:00] capitalize on that for political reasons and try to create fear and division because of it, just like any other issue that comes to absorb the national attention or the global attention. But I think keep your eye on where we're headed and have some faith that your fellow humans will help you get there.
And I think there's gonna be a lot of growth. There's a study from one of the big consultancies - the name escapes me right now - where they think that, and I think I have this figure right, over the course of about a decade and a half, AI is going to add like another 16% to the global GDP. And that's a huge deal over a short period of time.
And I think that's really only the start of it. And if there's that much growth happening, there's going to be a lot of opportunity for people regardless of who they are.
Haroon: And in terms of the different tech shifts that you've described, you mentioned the cotton gin as being one. What distinguishes this specific tech [00:05:00] revolution that we're going through? Is there anything in particular that may make it any different, or would you say that it's similar in terms of the magnitude of the impact it's gonna have on society?
xl: I think that there's probably only maybe two things in history that are gonna be comparable, and that's the industrial revolution and maybe the invention of agriculture. I think this is gonna be a fundamental shift in the way humans live and work and play. And I think it's going to happen shockingly quickly.
You pay attention to this stuff as well. You see the rate at which new things are coming out and stuff that seemed like it was just barely working a year ago is now very impressive. And I think that there's a good chance we continue on that kind of trajectory where things are moving faster and change is happening at a rate that is ahistorical.
Haroon: Has your opinion on the future path that AI and its impact on society is gonna take - has that changed over the past [00:06:00] couple of years with the release of technologies like ChatGPT and other types of LLMs and whatnot?
xl: Yes and no. I think that people that have spent time thinking about this and researching it knew that at some point we were gonna get here - or at least believed that at some point we were gonna get here - and it was really more a matter of when. And so I'd say the thing that has changed for me in the last few years is the "when" seems like it's gonna be now rather than someday in the future.
Haroon: In terms of the technologies that were released over the past couple of years, which has caused the biggest paradigm shift?
xl: I think the large language models, most recently ChatGPT. I think that those were much more effective than people really thought they were going to be. I think the results we got were well beyond what people reasonably expected to get out of that sort of approach. [00:07:00] But the thing that really said to me that something different was happening was when DeepMind first defeated a world-champion Go player. I had paid attention to Go over the years and knew that it was one of the few things humans could still kick butt on the computer at. We'd solved chess in the nineties - I think that was when that happened, with Deep Blue.
But Go was seen as this intractable thing because it was such a complex game. And so when we were able to go and play a global champion, and actually defeat him pretty soundly, and even play some interesting moves that people hadn't seen before - that's, I think, when it really clicked for me that, oh, this is something new.
This is different. This is really exciting. I actually watched some of those games live because I was pretty excited about it at the time.
Haroon: So I was listening to a YouTube interview the other day, and basically they were making a comparison [00:08:00] between AlphaGo beating Lee Sedol, and LLMs hitting the mainstream. And they were comparing the hype around AI technology back then with the hype that we're experiencing now. But the person in the interview, they were basically using it to level set the hype and say, yes, this is a big sort of step function increase in the generality of AI models, but it's gonna eventually be normalized and we're eventually gonna realize that maybe it was a bit overhyped and AI hasn't progressed as far as we may think it has progressed. What do you think of that sort of argument?
xl: Sure. There's certainly a possibility that we stall out at some point. We might hit a ceiling, at least for a while, for what our techniques are capable of. It's really a question of: are we following an exponential path, or are we following a series of s-curves, where growth starts out slow and then rockets upwards and then [00:09:00] levels off again, and then you just overlap a bunch of those.
And it can look a lot like an exponential curve, but in reality there are a lot of discovery-exploitation cycles happening. I think the one thing we can say is that if we are in a cyclical expansion of technological capability, the cycle time is getting shorter. So even if it does stall out, I don't think it's gonna stall out for very long. I think there's too much value here that's been demonstrated, and I think the amount of investment that's going into it is going to sustain the rate of innovation, or not let it falter for very long.
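xl's point about stacked s-curves masquerading as an exponential can be sketched numerically. This toy model (curve count, midpoints, and rates all invented for illustration) sums a few logistic curves with staggered midpoints; each curve plateaus individually, but the sum keeps climbing:

```python
import math

def logistic(t, midpoint, rate=1.0, ceiling=1.0):
    """A single s-curve: slow start, rapid rise, plateau."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def stacked_scurves(t, midpoints=(5, 10, 15, 20)):
    """Overlap several s-curves with staggered midpoints.

    Each curve levels off, but before one flattens the next has begun
    its rapid rise, so the sum keeps increasing - which, from the
    inside, can look a lot like one continuous exponential.
    """
    return sum(logistic(t, m) for m in midpoints)

if __name__ == "__main__":
    values = [stacked_scurves(t) for t in range(25)]
    # Strictly increasing over the whole window, despite being
    # built entirely from curves that individually saturate.
    assert all(b > a for a, b in zip(values, values[1:]))
```

Seen from inside one of the steep segments, the summed curve is hard to distinguish from a true exponential - which is why a stall in one cycle need not mean the overall trend has stopped.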
Haroon: In terms of other technologies, other developments in the AI space that you're excited about, is there anything else that stands out?
xl: Yeah, DeepMind's AlphaFold result. That's a protein-folding AI. Protein folding was another one of these things that scientists had been trying to deal [00:10:00] with for a long time. They couldn't find a way for computers to do it. I even remember reading at one point - I don't know, in the last 10 years - that they made an online contest where humans would come and try to figure out how proteins folded, because people were better at it than the computers were.
And DeepMind basically released a result that nearly solved it: for any amino acid sequence you could specify, it would say how it folded. And the reason that matters is the way a protein folds affects how it interacts with your body biologically. What that means now is we have this library of all of these biological components that we're gonna be looking into for targeted drug delivery, disease therapies, all kinds of things. I think we're gonna be reaping the benefits of that for a while, and I think that's a pretty exciting result as well.
Haroon: Okay xl, I am gonna jump into the tweet segment here, and [00:11:00] basically the idea is, I wanted to dig into a few of the tweets that you had either liked, retweeted, or tweeted yourself and get you to explain a little bit more and add a little bit more context. So you set up a poll on Twitter and you asked, when it comes to AI ethics, is it better for AI to reflect our culture as it is or an idealized fashion? Do you want to explain a little bit more about why the question was posed, what the sides of this discussion might look like?
xl: Yeah. So AI ethics is a field that tries to figure out, when we build these systems, what sort of effects they have on the world, and how we need to go about building them in such a way that we don't cause any unnecessary harm. And this is a political issue as much as it is a technical issue, because what defines harm really depends on your [00:12:00] worldview.
And so I think there's an interesting question, which is: if we're building these systems that are gonna interact with the world - and "think" I am using loosely here - what does it mean if the information we use to train them doesn't reflect the way things actually are?
For example, there might be situations where the photographs being used to train an image generator suggest that all scientists are old white guys, and people might be concerned that we want to show more variety, we wanna show more diversity. And so we will then make sure that the training information has diverse information in it, to represent all different kinds of people.
But the downside to that is there might be situations when you're trying to make predictions about the world, you'd be making predictions based on information that doesn't [00:13:00] represent how the world actually is right now. So it's really a question of, do we teach these systems how the world is or how we want it to be?
And there's a tension between those two things, I think. And so that's what I was trying to get at with that question.
Haroon: And it gets really tricky when we're trying to train AI based on what we want the world to be in an idealized state, because then it's up to interpretation. Whereas the former option in your poll seems much more straightforward. Is that correct?
xl: Yeah. And further, if you do decide that you want it to be a more idealized version of how you'd like the world to be, which ideals do you use to inform that? At that point it becomes a heavily politicized issue, so I think we're gonna see a lot more on that in the future as these systems become more powerful and affect more and more parts of society.
People are gonna be more and more interested in this. And there's also something to keep in mind, which is that there's a feedback effect, where these systems affect the world, and then [00:14:00] that affects the input that goes into them to train them. There's a classic example of a failure - I can't remember the city, but there was a city that did predictive policing.
And what they were trying to do was figure out where do we need to send our police officers to catch the most crime? And what would happen is they would send police officers to an area, and so they would have more arrests in that area. And so then the system would prioritize that area, and it made a feedback loop where it was saying like, have the police here all the time because this is where all the arrests are.
And so we have to be aware when we're building these systems that not only are they making predictions, but then we're acting on those predictions and how we're interacting with the world. And then that feeds right back into the systems.
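The arrest feedback loop xl describes is easy to reproduce in a toy simulation (all numbers invented for illustration): two districts with identical true crime rates, where each round the "predictive" system surges patrols to whichever district has more recorded arrests, and recorded arrests track patrol presence only:

```python
def simulate(rounds=50, officers=100):
    """Two districts with IDENTICAL underlying crime rates.

    District 0 starts with one extra recorded arrest. Each round the
    system surges 70% of officers to the district with more recorded
    arrests, and new arrests scale with patrol presence only - so the
    tiny head start compounds into an apparent 'high-crime' district.
    """
    arrests = [11.0, 10.0]  # recorded arrests; note the small head start
    for _ in range(rounds):
        hot = 0 if arrests[0] >= arrests[1] else 1
        patrols = [0.0, 0.0]
        patrols[hot] = 0.7 * officers       # surge the "hot spot"
        patrols[1 - hot] = 0.3 * officers
        for d in (0, 1):
            # Arrests reflect where officers are, not where crime is.
            arrests[d] += 0.1 * patrols[d]
    return arrests

if __name__ == "__main__":
    a, b = simulate()
    print(f"district 0 share of recorded arrests: {a / (a + b):.0%}")
```

The prediction becomes self-fulfilling: acting on the model's output generates exactly the data that confirms it, which is the feedback loop the interview is pointing at.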
Haroon: You retweeted a tweet from Li Junnan - I hope I pronounced that correctly - and you said that this is a huge step up. What Li had talked about in his tweet was a new technique called [00:15:00] BLIP-2, which is, quote, a "generic and efficient vision-language pre-training strategy that bootstraps from frozen image encoders and frozen LLMs."
Why do you think this is such a big step up?
xl: I think the thing that makes it such a clear step up - the technique actually isn't new exactly, but it's an improvement on the state of the art. Typically these models have worked in one world only: they've worked with language, or they've worked with pictures.
And this is a result that allows a language model to get information from a picture and talk about what's happening in the picture. So you could send it a picture and you say, what's in this picture? Why are the people doing what they're doing? And it can actually join information from both of those things together in crafting its responses.
Haroon: Gotcha. And is it as simple as - not to undermine it - combining different types of work that's being done in AI [00:16:00] research in silos, like the text generation models and the image understanding models, whatever you might call them?
Is it basically taking those two concepts and combining them, or are those two tasks much more difficult to pair together when you combine them?
xl: Yeah, it's taking existing things and combining them, but doing that is not necessarily easy. And so not only are the results interesting, but the way they did it is quite effective. I don't remember enough of the specifics on that one to get into the technical details, but the thing that really stood out to me was just how good the results and the demonstrations were. They call this multimodal - different modes of communication, or different modes of information - and I think multimodal AI models are something you're gonna hear more and more about in the future.
Haroon: The tweet says it's hard to have a genuine conversation about AI because people signal high status by being performatively [00:17:00] unimpressed by LLMs.
Can you explain a little bit more about why you might agree with this take?
@xlr8harder discusses the issue with LLM party poopers.
— Haroon (Not A Bot) 🤖 (@haroonchoudery)
4:58 PM • Feb 10, 2023
xl: Yeah. I think the elephant in the room on this is that the head of AI at Meta is going to the press and talking at some great length about how ChatGPT isn't really a breakthrough, downplaying the results as if they aren't impressive. But the thing is that they are actually impressive. Now, maybe the exact techniques they built on are not new to ChatGPT, but they're the ones that were able to put them together and do the engineering to turn that into a product they can ship.
And that's an important gap you have to cross. I tease Google a lot on my Twitter threads about not shipping any of their AI, because they've got these amazing results that they publish a paper about, but that no one ever gets to touch. [00:18:00] And whether or not what ChatGPT did was a scientific advancement,
it was certainly an engineering advancement, and I think it's really captured the imagination of people because they can actually get their hands on it and use it. And so there are a lot of people that I think are just being reactive against the hype around ChatGPT and downplaying it, like it's not that exciting because there are these other results that maybe look better in one way or another. But the fact is this is something that's there, and it's in people's hands, and it's happening at scale, and that's impressive.
Haroon: In terms of LLMs and the opportunities they present for folks to basically just speak computer programs into existence, right? We're getting to a point where AI is able to write your code for you and build powerful software for you. In terms of the opportunities that LLMs present, do you think they're going to fundamentally change the way that we write computer code forever?
@xlr8harder on the impact of LLMs on code and the massive implications this will have 👩‍💻
— Haroon (Not A Bot) 🤖 (@haroonchoudery)
5:02 PM • Feb 10, 2023
xl: Oh yeah, absolutely. I use a piece of software called Copilot that comes from GitHub, Microsoft, and OpenAI. It goes into your software editor and can predict the kind of code you're trying to write and write some of it for you. Now, there are still some errors at this point in this sort of thing, but as an experienced programmer you can easily identify those errors - in my experience, at least with the code that I've written with it - and I feel much more productive.
And I think that, this is just a taste of things to come. I think more and more of the [00:18:00] code will be something that can be generated and programmers are gonna be working at a higher level of abstraction. They're still gonna be there. They're still gonna be involved. They're still gonna be looking at all the code and tweaking it, but they're gonna be able to stay working at the bigger picture.
I think things are going to keep heading in that direction, and that's eventually going to make no-code and code-light tools available to more people, so that folks who don't have the technical background that actual programmers do can build things without necessarily even knowing how to program.
I absolutely think that we're gonna see that.
Haroon: Satya Nadella said something a few years ago along the lines of, every company is a software company. Would you agree that every company is an AI company now, or at least will become one in the next couple of years?
xl: Yeah, I think so. There was a phrase that was really popular about software eating the world. I think AI is doing that, and it's gonna happen a lot faster than it happened [00:19:00] with software. If you look at when the internet started getting into US homes in the 90s, when it really started to get uptake, to where we are now - that's 30 years, and it's pretty much changed the way we do everything. But it took three decades to really get here, and I think with AI we're gonna see a similar kind of fundamental change to the way we do business, to the way we do all kinds of things.
Whatever we're up to, AI is gonna be involved and it's gonna happen a lot faster than three decades.
Haroon: And in terms of AI being a differentiator, do you think it's going to be more difficult for companies to differentiate based on AI? Since everyone has access to the same LLMs and the same pre-trained models, do you think it's going to be more difficult to build that competitive advantage?
I'm curious how you think companies can build an AI advantage in this new age.
xl: It's a really good question. I think where we're at right now reminds me a lot of [00:20:00] the .com bubble, and I think you're gonna see a lot of people building stuff that didn't make a lot of sense to build in retrospect. But that's easy to see in hindsight and not so easy to see when you're there trying to figure out what to build.
But I do think it's still going to be possible to differentiate yourself, especially in the parts of the process that aren't the AI itself: the systems around the AI used to build it, maintain it, and inform it; the processes; the datasets that you build. Datasets are going to be huge - companies will have to learn how to leverage the information they already have but don't know how to use.
That's going to be a huge part of it: figuring out how do we take this stuff and put it in a form that we can use AI to interact with. And beyond that, execution is still gonna matter, user experience is still gonna matter, and those are still effective differentiators. Lots of people have ideas; not that many people [00:21:00] build them.
And of the people that try to build them, not that many do it very well. And so I think execution is always a differentiator, and I think it will continue to be.
Haroon: Well said. I'm gonna share one last one with you here.
The question you posed is, do LLMs think? And you stated that a binary yes or no answer is insufficient. And then you went on to say language capability alone is probably not sufficient to build AGI, but your suspicion is that it is a highly serviceable scaffold on which to build the parts that are missing.
Do you mind elaborating on that a bit more?
@xlr8harder on whether LLMs can think or not... 🤔
— Haroon (Not A Bot) 🤖 (@haroonchoudery)
5:01 PM • Feb 10, 2023
xl: Yeah, I think it is. It doesn't get quite as much attention as the exciting results that people see, but the problem is that it's a squishy concept, you know - what it is to think. When you get down to the details, these are hard problems. A famous one of them is actually called the hard problem of consciousness: how to determine if a system is conscious or not.
And I think we have similar problems when it comes to what thinking is. [00:22:00] It seems easy to say what thinking is when you're a person - you just say, I'm thinking. But defining it in a way that's empirical and can be measured and compared is not as easy as it would seem.
In particular, I think our existing models of cognition and what it is to think are really not very good. I think that to get at this question requires one to engage with metaphysics to some extent. And that's quite a can of worms to open: what it is to be conscious, what it is to think, what it is to have a self. I come from a non-dualistic spiritual background, so I tend to take a view that's quite a bit different from the Western view, which is that the idea of the self is an illusion that our brains build because it helps us engage with the world. And I [00:23:00] think that when we're trying to decide if a computer is thinking, we're projecting the way we think about ourselves onto the computer. And I don't think that really works very well. That's why I brought up the question of do LLMs think - because I think from one angle, they absolutely do, and then from another angle they absolutely don't.
And it's really hard to say where the truth lies. And I think it's just, as I say in there, it's not a binary question. I think we need to redefine those questions and ask them in entirely different ways. There's a famous example, which is: do submarines swim? They go through the water, but are they swimming?
And at some point it just becomes a question of semantics: is this swimming or is it not swimming? And you can get whatever answer you want. I think we need to think about machines thinking in the same way. Do submarines swim? Do machines think? I think they do - they already [00:24:00] are.
And we're anthropomorphizing them and trying to see them as humans in a box, and they're just not. So that's where I was going with that question - I was trying to get at some of that.
Haroon: Super interesting. xl, thank you so much.
xl: Yeah, it was fun. Thanks for having me on.
And that does it for today's interview. Huge thanks to @xlr8harder for joining us and sharing his wisdom.
For more Q&As from leaders in AI - like Mark Cuban, the CEO of Runway ML, and a co-founder of Hugging Face (coming soon) - plus daily AI news, be sure to subscribe to Not A Bot, the world's most subscribed daily AI newsletter.
As always, thanks for reading, and see you next time. ✌️
- Haroon - (definitely) Not A Robot and @haroonchoudery on Twitter