
The AI Compute Landscape & Web3


Space Summary

This Twitter Space, The AI Compute Landscape & Web3, was hosted by exa_bits. It focused on the convergence of AI and Web3, shedding light on the pivotal role of artificial intelligence in shaping decentralized finance, blockchain innovation, and user experiences within the Web3 ecosystem. Participants discussed the challenges, collaborative potential, and emerging trends around integrating AI compute with Web3 technologies, emphasizing the need for scalable, secure, and privacy-conscious AI solutions. They also explored the synergies between AI algorithms and blockchain, envisioning a future where smart contracts, DeFi platforms, and user interfaces are powered by AI-driven functionality that improves efficiency and user interaction in Web3 applications.


Questions

Q: How does AI contribute to the development of Web3 technologies?
A: AI enhances data processing, automation, and smart contract functionalities in Web3 applications.

Q: What are the challenges of integrating AI compute with Web3 platforms?
A: Scalability issues, data privacy, and AI model interoperability pose integration hurdles.

Q: In what ways can AI optimize decentralized finance platforms?
A: AI algorithms can improve risk management, trading strategies, and liquidity provision in DeFi protocols.

Q: How can blockchain benefit from AI-powered computing?
A: AI can bolster blockchain security, consensus mechanisms, and transaction efficiency.

Q: What collaborative opportunities exist between AI and Web3 developers?
A: Partnerships can drive innovation in AI-driven smart contracts, decentralized AI networks, and predictive analytics for blockchain applications.

Q: What role does AI play in enhancing user experiences in Web3 applications?
A: Personalization, content recommendation, and AI-driven interfaces elevate user engagement and retention in Web3 platforms.

Highlights

Time: 00:10:45
AI Empowering Decentralized Finance: Exploring how AI technologies revolutionize DeFi operations and investment strategies.

Time: 00:25:58
Blockchain-AI Synergy for Web3 Innovations: Discussions on synergies between AI and blockchain tech for Web3 advancement.

Time: 00:40:12
Privacy in AI-Driven Web3 Ecosystems: Debating the importance of privacy protocols in AI-infused Web3 environments.

Time: 00:52:20
Scalability Solutions for AI in Web3: Innovative approaches to address the scalability challenges of AI in Web3 platforms.

Time: 01:05:33
AI-Powered Smart Contracts in Web3: Exploring the future potential of AI-driven smart contracts for Web3 applications.

Time: 01:15:47
Web3 User Experience Enhancements with AI: Insights on leveraging AI for personalized and intuitive user experiences in Web3 interfaces.

Key Takeaways

  • Understanding the importance of AI in the evolving Web3 environment.
  • Exploring the relationship between AI compute power and blockchain technology.
  • Insights on leveraging AI for enhanced web interactions and decentralized finance.
  • The significance of efficient AI computation methods in advancing Web3 applications.
  • The potential fusion of AI and blockchain for innovative digital solutions.
  • Challenges and opportunities in merging AI compute with Web3 technologies.
  • Balancing scalability and privacy concerns in AI-driven Web3 ecosystems.
  • The role of AI in shaping the future of decentralized finance and digital currencies.
  • Emerging trends in AI compute utilization within the Web3 landscape.
  • Collaborative efforts driving the integration of AI and blockchain technologies.

Behind the Mic

Introduction to AI and Future Aspirations

Greetings, everyone. Super excited to be here again for another Exabits session, discussing today the compute landscape and talking about the future we want to build with independent AI, and how Web3 paves the way for building the future we want to live in: for our families, for our friends, for our children, our grandchildren. Because the reality is that AI is going to be the operating system of humanity in the very near future, and so we have to be vigilant about how we develop it. We're literally living in the very infancy of this major development. We're the pioneers. And so we at Exabits are really excited to be in this place in space and time, in this place in history.

Business Reality and Narrative Focus

And today I want to discuss the hammer-and-sickle aspects of our business: how a real business, with real assets, with real compute, and not just a narrative, is the way to the future we want to live in. When organizations focus exclusively on the narrative, the problem is that you end up with an inflated meme coin; at the end of the day, an overfunded meme coin. And at Exabits, we feel that this is not the way to democratize and create independent AI. So I just want to open it up to our guests here today. Any questions you have about Exabits and about independent AI, everything's on the table. But what I'm going to discuss to start is the compute landscape.

Vision for the Compute Landscape

And so I'm really excited to share this vision of the compute landscape as I see it, as Exabits sees it. Again, we believe that democratizing AI is critical to creating the future we want to build, and that starts with decentralized compute, with ownership of compute. It supports a world where the future of AI isn't just left to the agendas of giant corporations and governments. We want to give everyone the ability to own AI compute, to offer people a voice in how AI is developed, deployed, and operated. So let's talk a little bit about how AI has absolutely broken adoption records since the launch of ChatGPT.

Comparative User Adoption Rates

Let's look at a few of our favorite apps of recent times. Facebook took four and a half years to reach 100 million users. 4.5 years. Instagram got there significantly faster, at two and a half years. TikTok: nine months. But then ChatGPT smashed this record by taking only two months to reach that number. I'd like to repeat that: ChatGPT broke the record by taking only two months to reach 100 million users, which is an amazing and telling statistic. And then if we look at the linchpin of AI, we look at the compute hardware king, Nvidia, famed for its H100s and A100s.
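The adoption figures above can be put side by side with a quick sketch; the month counts are the rough numbers quoted in the talk, not audited statistics:

```python
# Time each app took to reach 100 million users, per the figures quoted in the talk.
months_to_100m = {
    "Facebook": 54,   # ~4.5 years
    "Instagram": 30,  # ~2.5 years
    "TikTok": 9,
    "ChatGPT": 2,
}

baseline = months_to_100m["Facebook"]
for app, months in months_to_100m.items():
    # Relative pace compared with Facebook's ramp to 100M users.
    print(f"{app:>9}: {months:2d} months ({baseline / months:.0f}x Facebook's pace)")
```

On these numbers, ChatGPT's ramp was roughly 27 times faster than Facebook's.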

Nvidia's Market Position

Nvidia surpassed Apple and is right behind Microsoft, exceeding $3 trillion in market value. So in just a very short time, Nvidia surpassed Apple and sits just behind Microsoft in market value, which is, again, an incredible statistic. And Exabits, we are the upstream provider. What that means is that we're very close to Nvidia in the market, very close to Nvidia across the space. Typically, the downstream is where the value lives, because that's where the money is being exchanged: it's where individuals and organizations are paying money for a given widget or commodity.

Value in the Upstream

So for an upstream provider like Exabits, the conventional wisdom would be that the downstream holds the higher value and the upstream doesn't. But in this case, in the case of compute, the upstream is where the value is, because of the utter scarcity of the chips, the infrastructure, the machines, the clusters of H100s, A100s, RTX 4090s, and the soon-to-arrive B-series. The supply chain is gruelingly difficult to scale up. So now that we've broken that out, let's talk about the compute landscape a little more.

Core Centralized Compute Landscape

In the core centralized cloud compute landscape, the tier-one providers include Google, AWS, and Microsoft. These are the giants. These are the closed-market data centers.

Hoarding of Computing Resources

These are the organizations that are absolutely and shamelessly hoarding these assets. Consider, for example, Nvidia's output of H100s in 2024: by the end of this year, they'll have produced about a million H100 chips. And of that million chips, 90% are already allocated, spoken for by Google, AWS, Microsoft, and a few other inflated giants. That leaves a very small number available for everyone else to fight over. The tier-two cloud compute providers include organizations like Lambda, CoreWeave, and Cloudflare. The supply trickles down to these organizations, which are a little more aggressive on price, but the scarcity still exists here. These are providers with very limited resources, and on the rental side they're pretty much fully allocated as well. In fact, one of these three providers is currently providing for Microsoft. That's how bottlenecked compute is at the moment.
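The allocation math above works out as a quick back-of-the-envelope sketch; the one-million-chip output and the 90% allocation are the figures quoted in the talk, not official Nvidia data:

```python
# How many 2024 H100s remain once the hyperscalers' allocation is spoken for,
# using the rough figures quoted in the talk (assumptions, not Nvidia data).
annual_h100_output = 1_000_000   # chips expected by end of 2024
hyperscaler_share = 0.90         # fraction allocated to Google, AWS, Microsoft, etc.

allocated = round(annual_h100_output * hyperscaler_share)
remaining = annual_h100_output - allocated
print(f"Allocated: {allocated:,}; left for everyone else: {remaining:,}")
```

On these numbers, only about 100,000 chips per year are left for every other provider combined.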

Centralized vs Decentralized Compute

Now that we've discussed the centralized cloud compute landscape, let's talk about the decentralized one. Here we have organizations like Akash, which is a peer-to-peer network. We have Gensyn, which works at the verification layer; their model, which I don't believe has launched yet, verifies the compute. Then there's Aethir, a giant aggregator for AI and gaming, and io.net, a massive aggregator for AI and, I believe, other use cases. These projects all approach decentralizing compute by connecting devices from all over into a network: your MacBook Pro, your gaming GPU. And there are even organizations that go so far as to say they can access your phone.

Limitations of Decentralized Computing

This DePIN infrastructure can work for certain very low-level inference. But when it comes to producing compute for serious AI training, inference, and fine-tuning, we, and all of these organizations included, can't do it that way. As sexy as it sounds, the narrative of aggregating random devices into a powerful compute network that serves serious AI training and inference workloads just doesn't work at this time. Real AI applications demand enterprise-grade GPUs from Tier 3 and Tier 4 data centers optimized for heavy AI tasks. These are also called AIDCs: AI-ready data centers. Exabits addresses this need by providing robust, reliable performance through optimized infrastructure.

The Role of AIDC in AI Applications

io.net and Aethir have this layer of decentralized compute, providing a geo-distributed network. But when it comes to serving clients that require serious AI training and inference, they utilize AIDCs over a direct link. Because if you think about it, if you have a serious AI project, say you design and develop a GPT-based app, you're going to want that app online 100% of the time. If it goes down, then all of the users you've acquired through all of those efforts and resources are now offline.

Service Level Agreements and Infrastructure Needs

So organizations like Lepton, even io.net, Akash, and Aethir, require an SLA, a serious AIDC connection and infrastructure, to serve these needs. And that is precisely what Exabits is about, and what we offer. Exabits has AIDCs at scale: H100 clusters, A100 clusters, and RTX 4090 clusters. And our vision is that the first layer, before we provide anything else, must be absolutely reliable, high-performance GPU compute. That's the critical piece. If we're not offering that, then it's really just air.

Exhibits Offering and Competitive Edge

You know, we'd really just be offering air. And so the difference between Exabits and others is that we're offering the hammer and the sickle.

Partnerships and Compute Capabilities

And we have partnerships: like I said, io.net, Aethir, Akash. We help to power them, so they have the ability to offer this as well. It's absolutely critical to our mission to offer this compute at scale and to start with that. Exabits is the only team in the Web3 space that can deploy, interconnect, and operate thousands of H100 GPUs by ourselves. How we do that is, first off, the core team at Exabits has literally been at this for decades, and we have all of these critical boxes checked. We offer the upstream compute; we're the upstream providers for io.net, Aethir, Akash, Hyperbolic, Nebula Block, and many more. And we have a massive supply, and I want to go back to supply.

Understanding Supply and Demand

The thing about supply is that there are a lot of misconceptions about the supply of GPU chips. It goes like this. First, consider a relevant small cluster of H100 GPU machines. To define that: an H100 GPU machine is eight chips, and a small cluster is 32 machines, so that's upwards of $10 million just to purchase the chips. But here's the thing. As I mentioned before, the allocation of H100 chips is already highly bottlenecked: 90% of these chips are already allocated to the giants, to Google, to Microsoft, to Azure, to OpenAI. And of the small pool of chips left available, even if you have the $10 million, even if you have the money to step onto the scene and purchase these chips, the problem is that there is a very serious, scrutinizing vetting process that you will undergo.
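The cluster math above can be sketched out quickly; the per-chip price here is a hypothetical ballpark chosen to be consistent with the talk's ~$10 million figure, not a quoted price list:

```python
# Rough cost of the "relevant small cluster" described in the talk.
chips_per_machine = 8        # one H100 machine carries eight GPUs
machines_per_cluster = 32    # the talk's definition of a small cluster
price_per_chip_usd = 39_000  # hypothetical ballpark per-H100 price (assumption)

total_chips = chips_per_machine * machines_per_cluster
silicon_cost = total_chips * price_per_chip_usd
print(f"{total_chips} chips ≈ ${silicon_cost / 1e6:.2f}M just for the silicon")
```

So a 32-machine cluster is 256 chips, and at this assumed price the silicon alone approaches the $10 million mark, before any networking, power, cooling, or staffing.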

The Vetting Process

It's kind of like buying an aircraft. You don't just need the aircraft; to fly it you also need pilots, a hangar, a maintenance team, a runway, the airport itself. And the same is true here: you're not just allowed to go buy an F-16, and you're not allowed to just go buy a commercial airliner. You're going to be vetted. It's the same with GPUs. And why is that? Because AI is the most transformative technology and will be the operating system of our future. So this vetting process is very serious and very extensive, and very few will have the ability to push through it and actually get the chips.

Post-Procurement Needs

And then when you have the chips, what next? You've gone through the vetting process, you've got the chips; now you need infrastructure and a team to support the operation. That's why it's highly notable that Exabits is the only team in the Web3 space that can deploy, interconnect, and operate thousands of H100 GPUs by ourselves, the Exabits crew. And the other piece of the puzzle is that we have the largest supply available across AI data centers, across AIDCs, for enterprise compute, meaning serious AI training and inference, not just theoretical or academic workloads.

Middleware and Cost Advantages

Then, on the middleware side, we have an acceleration, stabilization, and optimization software stack that enables maximum performance and stability across AI compute. On top of that, we sit in a very unique space where we're not absorbing the brick-and-mortar data center costs: cooling the machines, the personnel, et cetera. So we have the ability to offer massive savings to our customers. Going back to the different tiers, when we're compared to the top-tier providers like Google and Azure, we can often beat their pricing by 80% and pass that value on to our customers.

Questions and Community Engagement

I'm going to get to some questions here; I've got some questions from our community chat on Telegram. And I just want to give a quick shout-out to Miao, our community manager, who is making all of this possible, who is behind the scenes managing questions and managing our guests. Today I'm the exclusive guest, but she's absolutely killing it, and we appreciate her so much. On the question about security and how we handle it: it's a complex process that we take very seriously, and we utilize Kubernetes and other high-level technologies to provide security across our network.

Technical Questions and Team Support

This is a question that I will push to our tech team. Not being highly technical myself, this isn't exactly my forte, so I'm going to push this to our tech team. Beckman, I see you in the chat, and I'll make sure this question gets answered in detail for you. My apologies for not having a super succinct answer. I think that's the only question at the moment; I'm going to check the chat.

Concluding Remarks

All right. Yeah, so that's the only question.

Introduction and Commitment

Yeah, thanks. Just let me know if that's the only question for now. So, that question, Beckman, I'll make sure gets answered. But going back to the subject at hand: at Exabits, we come first from a data-center domain-expertise background. That is the core team; that's the core. And we have partnerships that run deep in the space, enabling us to procure compute power across AI data centers around the world and to provide it in a very unique, high-value, cost-effective way that is exclusive to Exabits.

Emerging Potential

And really, at the moment we're a dark horse. We're just gaining the traction that the world deserves to see from us, because the more traction we get, the more opportunity there is for independent AI. Look at it like a seed: we have planted a seed in the world through our domain expertise, through our partnerships, through our ability to push through all the bureaucracy and all the vetting to have access to AI compute, offering it in a way that supports literally anyone who wants to build an AI. And with that said, I'm going to close the session with this.

Community Engagement and Mission

We value our OGs, and we value our community so much. Our community knows what our mission is: to support independent AI, to support the ability of builders everywhere, anywhere, to have access to AI compute in a real way. We're not just a narrative, and I want to expand on that a little. The beauty of cutting-edge technology spaces is that they're creative spaces, right? Spaces where developers and great minds can come together around an idea, which is ultimately formed into a narrative and then, through resourcing, is able to launch as a valuable part of the world.

Vision for Real Compute

And I think one of the things the Web3 space is coming out of, but which is still really present, is hanging on the narrative, hanging on the idea of a thing, as opposed to being synced with reality and moving from that more prudent place. And so at Exabits, we want to support a world where real compute is available to real builders everywhere. That is our core mission, and we want to give everyone the opportunity and the availability of compute so everyone can participate.

Conclusion and Gratitude

And so with that, I actually might have a response. Let's see. One moment. Yeah, one of our tech people is going to respond. Beckman, we're really bullish on your question, and we want to make sure we answer it, so it will be answered in the chat or on Telegram. But at any rate, thank you all so much for joining, and please continue to support us. We have huge announcements coming down the pipeline, announcements that everyone asks us about on a daily basis.

Acknowledgments and Future Engagement

I want to give a shout-out to some of our people: Apon, Steve Barnes, Punisher, Mike. These are some of our Telegram and Discord folks who have been supporting us from the beginning, along with many other OGs. My apologies if I've forgotten to mention you; there are so many of you, and we appreciate you all so much. We are here on Telegram, we are here on Twitter, and we are ready to answer questions and to let you know how you can be a part of the project. So with that said, sayonara.

Closing Remarks

Thank you so much. Thank you so much for being a part of Exabits, and thank you to my team. Thank you to Miao, thank you to the executive team. We look forward to deepening this conversation and to connecting with more of you. And again, thank you for your support; it's immeasurable, and we appreciate you. Love you all.
