
Space Summary

The SubQuery Founder AMA, a Twitter Space hosted by SubQueryNetwork, delved into the core aspects of pioneering web3 infrastructure development, highlighting scalability, ecosystem support, and efficiency in blockchain projects. With a focus on fast and flexible solutions, SubQuery's commitment to collaboration, community engagement, and innovation shone through. Supporting a vast array of blockchain networks, SubQuery ensures adaptability and relevance in the ever-evolving web3 space. The discussion underscored the crucial role of scalability, interoperability, and community-driven strategies in shaping the future of blockchain development.

For more spaces, visit the Infrastructure page.

Questions

Q: What is SubQuery's main focus in web3 infrastructure?
A: SubQuery emphasizes fast, flexible, and scalable infrastructure development for blockchain ecosystems.

Q: How many blockchain networks does SubQuery support?
A: SubQuery supports a wide range, including @ethereum, @base, @iotex_io, @Polkadot, @cosmos, @avax, @NEARProtocol & more.

Q: Why is scalability important for SubQuery?
A: Scalability is a priority to ensure efficiency and performance across diverse blockchain projects.

Q: How does SubQuery drive innovation in the blockchain space?
A: Through collaboration and ecosystem expansion, SubQuery empowers and supports innovative projects.

Q: What role does community engagement play in SubQuery's strategy?
A: Community engagement and shared visions are essential for SubQuery's approach to fostering blockchain innovation.

Q: Why is interoperability crucial in web3 development?
A: Interoperability ensures seamless interaction and wider support for various blockchain networks.

Q: How does SubQuery contribute to the growth of web3 ecosystems?
A: SubQuery's commitment to scalability and ecosystem support fosters growth and adaptability in the web3 space.

Q: What sets SubQuery apart from other infrastructure providers?
A: SubQuery's focus on speed, flexibility, and ecosystem diversity distinguishes it in the web3 infrastructure landscape.

Q: What benefits do blockchain teams gain from using SubQuery?
A: Blockchain teams benefit from SubQuery's efficient infrastructure, scalability features, and broad ecosystem support.

Q: How does SubQuery ensure relevance and impact in the evolving blockchain industry?
A: By expanding its ecosystem reach and fostering innovation, SubQuery remains relevant and impactful in the blockchain sector.

Highlights

Time: 00:12:45
Fast and Flexible Infrastructure: The SubQuery founder emphasizes the importance of fast and flexible web3 infrastructure.

Time: 00:22:30
Ecosystem Support Across Networks: Insights shared on supporting @ethereum, @Polkadot, @cosmos, @NEARProtocol & more blockchain networks.

Time: 00:33:15
Scalability in Web3 Development: Discussion on the significance of scalability for efficient blockchain project management.

Time: 00:45:20
Community Collaboration for Innovation: Highlighting the role of community engagement in driving blockchain innovation.

Time: 00:55:10
Interoperability for Adaptability: Exploring how interoperability enhances adaptability in the web3 ecosystem.

Time: 01:05:55
Empowering Blockchain Teams: Insights on how SubQuery empowers blockchain teams with scalable infrastructure.

Time: 01:15:40
Vision for the Web3 Future: Discussing SubQuery's vision for the future of blockchain development and infrastructure.

Time: 01:25:19
Innovative Solutions for Blockchain Projects: Showcasing SubQuery's innovative solutions to address blockchain project needs.

Time: 01:35:02
Adaptability and Relevance in Changing Landscapes: The importance of adapting and remaining relevant in the evolving blockchain industry.

Time: 01:45:30
Community-Centric Approach for Sustainability: Emphasizing the sustainability of SubQuery's community-driven approach to blockchain infrastructure.

Key Takeaways

  • SubQuery focuses on fast and flexible web3 infrastructure development.
  • The platform supports a wide range of blockchain ecosystems and protocols.
  • Scalability and efficiency are key priorities for SubQuery Founder.
  • Insights shared on @ethereum, @base, @iotex_io, @Polkadot, @cosmos, @avax, @NEARProtocol & 200+ more.
  • SubQuery's approach aims to empower and drive innovation across diverse blockchain projects.
  • Collaboration with various blockchain networks ensures adaptability and broader support.
  • Ecosystem expansion strengthens SubQuery's impact and relevance in the web3 space.
  • The AMA session highlights SubQuery’s commitment to fostering blockchain innovation.
  • Community engagement and shared visions play a pivotal role in SubQuery's strategy.
  • The discussion emphasizes the importance of scalability and interoperability in web3 development.

Behind the Mic

Sadeena

Welcome and Introduction

Welcome, everybody. Good morning, good evening. I'm calling in from Berlin. We spent a couple of intensive days at the Web3 Summit, and it's been a hectic week for us, with a lot of major developments that we shared with you. I know that you are following what's new with SubQuery; that's also why you are here. We've launched an entire bold rebranding, and we were at the Web3 Summit, so we definitely have a lot to talk about. That's why we thought this was a perfect occasion to connect with you, give you a chance to ask us some questions, and of course invite James to go in depth about what's new for SubQuery and address your questions.

Agenda

So I think it would be best to first hear from James and then go through your questions. Thanks for filling in the form; a lot of the questions are super interesting and super valid for us, so we were happy to gather them from you. As I said, we launched a new product: inference hosting for AI models, with a focus on large language models. And this perfectly aligns with the entire mission statement of SubQuery: to pioneer a complete web3 infrastructure revolution, bring truly decentralized services to the web3 space, and power up builders. We've been doing exactly that with indexing services, and now RPC alternatives, and the next goal for us is to power production-ready and truly decentralized AI agents. So that's a big milestone for us.

Introduction of Changes

And James, would you like to tell us more about this big change? Yeah, it's great being here. Marta, I think you've enjoyed it as much as I have. It's been a beautiful week here in Berlin, a great conference, and a great place to announce these latest changes. So maybe we'll start with some history. Last year in December, we finished being just an indexer. Previously, everyone thought about SubQuery as just a data indexer, and last year we took the first step outside of that by adding support for RPCs. And it kind of shows our view here: we're not just an indexer, we're trying to be a decentralized infrastructure company that provides support for any infrastructure.

Vision and Direction

The goal here is that developers can build and decentralize their applications. We've always been thinking about new options and new opportunities around what we can decentralize, and yesterday, of course, we announced the next step, which is AI. Now, let me give some context on why AI matters. Large language models like ChatGPT are transforming the world, and I'm sure that everyone here has heard about AI; if you haven't, you've probably been living under a rock. It's taken the world by storm. These models are trained on large data sets, and they produce human-like outputs: text, videos, images, making life easier.

Impact of AI

A lot of people use them to be more productive, to write things, or to answer questions you might have. But like previous technological shifts, this innovation is centralized within one or two companies: companies like OpenAI, which runs ChatGPT, and Google, which runs Gemini. They have basically the entire market right now. And the issue here is that AI, even though it's quite innovative, is incredibly centralized, well beyond anything else in terms of centralization. And that is a massive issue for us. I think it should be a massive issue for us all.

Concerns Over Centralization

For example, I use ChatGPT quite a lot, whether it's to help me summarize an article, to write something, to answer a question, or to convert or write some code. But ChatGPT has the entire history of all my prompts. They have every private question I've asked it, and they can use that information to improve their closed-source models, and they can use that information to spy on me. They have a pretty good understanding of what I'm interested in. You know the joke, "whenever I'm about to die, please delete my internet browsing history"? Well, I think we should update that to also say "please delete my ChatGPT prompt history."

AI Centralization Issues

So AI is just like the old era, or even worse than the old era: its centralization has meant some companies have an unbelievable view into private information about you, your work, or your life. And that should be a cause for concern for all of us. That's what we're trying to address; we are trying to provide a decentralized alternative. Now, let's be careful. There are two things here: inference and training.

Understanding AI Phases

These are the two phases of AI. Training is the concept of teaching a model how to be a model. Training on a dataset is very computationally expensive; it's done on massive GPUs in data centers, it's incredibly expensive, and it's probably best left to the biggest companies out there. We're not focusing on training; other teams are doing that. We're focusing on the other side, and that's inference. Inference is taking an already trained model, putting it in a box, and asking questions to it, using that existing model's training to answer the questions we give it.

Inference and Existing Services

Inference is basically making a trained model production-ready, and that's what we're focusing on. Now, there are existing inference services out there. For example, you can use AWS or GCP, or you can use a commercial service like ChatGPT. But they have high costs, and they're very complex to set up; if you've ever run DevOps yourself, MLOps is even more complex, even tougher. They also have a lot of vendor lock-in. All these companies like Google, AWS, and Microsoft are trying to encourage you to use their own tools to make it easier. But that means you're locked in with them, you can't leave them, and you're only helping them get better and build a more closed-source future.

Focus on Open Source Solutions

We think we need to focus on open source. Open source is the solution here. Even Google themselves released a report last year saying that there is no moat with AI, that open source can compete with, if not exceed, closed-source models. So we're out to help open-source models, and what we're going to do is provide a decentralized network where you can build, test, deploy, and run intelligent applications on your own models. What will that mean in practice? For example, imagine your favorite wallet app. The wallet app wants to build a custom AI that's an expert in everything about tokens and can answer questions about your wallet. So, for example, you can ask questions like: what is this new token I have? Tell me more about it.

Use Cases for Decentralized Network

Tell me what its tokenomics are, or when the next unlock schedule is for tokens I have, or what the best or cheapest conversion is for this token on this network to this token on another network. And the AI would have the information to do that. A team, a wallet, would train that model, but they would need to run it somewhere. And we're going to provide them the production inference service to run that model, and then allow an application that you use on your phone or desktop to query that model and ask those questions. So it's all about building intelligent applications. Another example might be your favorite NFT marketplace. You might want to ask it: hey, I really like this art.

Enhancement through AI Applications

Can you tell me any other artists that are quite similar, or is there any other work by these artists from previous sets available for purchase? It would be able to answer those kinds of questions. Or it might be for helping people: let's say we want to educate people on DePIN or DeFi, and you might want to build an intelligent DeFi app that educates people on what certain perpetuals or synthetic assets are. All those things are possible with AI, and we're going to make it possible to build and run those AI models in a highly production-ready and decentralized way. And how are we going to do it? We're going to focus on open source. There are a lot of open-source tools out there.

Implementation of Open Source Tools

Some of them are almost as good as the leading models today. We're going to do integrations with tools like Hugging Face, for example, to make them easy to leverage. We're going to use industry-standard APIs, like OpenAI's standardized API for querying, which is what everyone uses. We're going to contribute back to the AI community to help them. And finally, we're going to build a decentralized but accessible payment model. Now, the way you pay for AI in the world is usually by tokens. This is very confusing: tokens for AI are not the same as tokens like SubQuery tokens. They're different.
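Following an OpenAI-compatible convention, as mentioned above, means switching providers is mostly a matter of changing a base URL and API key. As a rough sketch (the model name and endpoint path are illustrative placeholders, not SubQuery's actual service):

```python
# Sketch: building a request for any OpenAI-compatible chat endpoint.
# The model name below is a hypothetical placeholder; SubQuery's
# actual endpoints and supported models may differ.

def build_chat_request(model: str, user_prompt: str,
                       system_prompt: str = "You are a helpful assistant.") -> dict:
    """Build the JSON body for a POST to <base_url>/v1/chat/completions."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_chat_request("llama-3-8b-instruct",
                             "What is decentralized inference?")
# The same payload shape works against any provider that follows the
# OpenAI chat-completions convention; only the URL and key change.
```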

Understanding AI Tokenization

A token essentially means a word or part of a word. So when you send tokens (words) to an AI inference provider, they charge you by the word, essentially. And we're going to do the same to make it easy to adopt, so we have to build a token- or word-based pricing model. Now, we're going to do this over time; we've released a roadmap. To summarize, it starts with a demo that's available today. You can go to our website, and at the top there's a decentralized AI link; you can click on that and try out talking to a Llama model, which is pretty cool.
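Token-based billing as described can be sketched in a few lines. The price and the rough four-characters-per-token heuristic below are illustrative assumptions, not SubQuery's actual rates (real providers count tokens with a proper tokenizer):

```python
# Sketch of token (word-piece) based billing. The rate and the
# ~4-characters-per-token heuristic are illustrative assumptions only.

PRICE_PER_1K_TOKENS = 0.0005  # hypothetical rate, in dollars

def estimate_tokens(text: str) -> int:
    """Very rough heuristic: about one token per 4 characters."""
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, completion: str) -> float:
    """Providers typically bill prompt and completion tokens together."""
    total = estimate_tokens(prompt) + estimate_tokens(completion)
    return total / 1000 * PRICE_PER_1K_TOKENS

cost = estimate_cost("What is SubQuery?",
                     "SubQuery is a web3 infrastructure provider.")
```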

Roadmap and Future Hiring

But we're going to be releasing an in-depth roadmap on Monday. Essentially, it will include things like allowing anyone to deploy their own custom AI model to our network, and allowing people to use RAG files (a technical term I won't explain in depth here) to add contextual data to any AI. Now, why can we do this better? Actually, before I answer that question, I might share the Twitter Spaces code. I think it is "airevolution."
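Retrieval-augmented generation (RAG), mentioned in passing above, essentially means retrieving relevant context documents and prepending them to the prompt before inference. A toy sketch, using word overlap as the retrieval score (a production system would use vector embeddings and a vector store instead):

```python
# Toy RAG sketch: pick the document most relevant to the question
# (by word overlap) and prepend it as context. Real systems use
# vector embeddings, not word overlap; this only shows the shape.

def retrieve(question: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str, documents: list[str]) -> str:
    context = retrieve(question, documents)
    return f"Context: {context}\n\nQuestion: {question}\nAnswer:"

docs = [
    "SQT is the utility token of the SubQuery Network.",
    "Berlin hosted the Web3 Summit this year.",
]
prompt = build_prompt("What is the SQT token?", docs)
```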

Advantages of Subquery

So, "airevolution," all lowercase, no capitals. So why can SubQuery do this better? We can address some issues posed by the current state of the AI market by using decentralized options, essentially by having a network of node operators, which we have today. A lot of those node operators already have GPUs, and inference doesn't require big GPUs, just small ones that you can run at home. You can even do some inference on CPUs, for example.

Decentralization Benefits

We can do this in a decentralized way, and that provides improvements in reliability: any node operator anywhere in the world can provide support, and it's unlikely that all of them go offline at the same time. It also has better performance because there are more nodes closer to users, so they get faster response times on their queries. We're going to make it easy to access: because the pricing model will be the same, it'll be simplified access, it'll be cost-efficient, and it'll follow industry standards. And of course we're going to make it very flexible: any model that you want to run on the network will be supported, as long as it's open source.

User Empowerment in Model Training

Anyone can fine-tune or train their own model. If you want to train a model that speaks in crypto chat language, you can do that. I don't know why you'd want to, but you can. And you can submit that to the network for people to run inference workloads on top of. Now, we also just announced a new look. We updated our brand; you can see it on our website.

Introducing the New Brand

It's got a brand new logo and a bunch of new color schemes; I hope you like it. The flower and root theme symbolizes our focus on being an infrastructure company. The roots represent how stable and deep we go into every application and every network around the space, and the flowers symbolize the diversity of the applications that build on top of SubQuery. So there's a whole bunch of new stuff. I hope you like it. We've been working on it for a while, and it's great to finally update the brand for everyone. I think that's what I'd like to talk about. Marta, what other questions do we have?

Benefits of Subquery for LLM Users

Well, thanks, James. You touched upon a few questions already. The one that I wanted to start with is: what benefits does SubQuery bring to LLM users? You've touched on this; is there anything you would like to add or summarize to answer this question? I think the big one that people don't realize is that ChatGPT has access to everything you send it. By running a similar model like Llama, which was built by Facebook but is open source, across a network, no individual node operator has access to your chat history. And that goes a long way in reducing the impact if, say, ChatGPT got hacked and suddenly everyone could read everything you've ever sent to it. That's a massive security risk right there.

Security and Privacy Assurance

So I think that's a really big advantage that I'm looking forward to: I can be more confident that the queries and prompts I send to an AI running on the SubQuery Network are not going to be viewed or read by some other third party, not going to be sold to an advertiser, and not going to be used or leaked at any point in the future. Yeah, that's definitely a solid one. Another question is: are we interested in diffusion models? I assume by diffusion models you mean generative image models? Yep. So we're focusing on text-based LLMs at the moment, and the reason why is that they're a lot less resource-intensive to run.

Focus on Text-Based Models

A text-based model doesn't require as much GPU. Diffusion models are very expensive. They're great, you can generate some pretty cool images with them, but they're very computationally expensive. Now, there's no reason why we can't support diffusion models in the future; the OpenAI API that we use will work for that, and there is no limitation there. It's just that we want to focus on text, because we don't want node operators to have to go out and buy a ton of GPUs to start with. We want them to start slowly and grow things organically over time. So focusing on LLMs rather than diffusion models, even though there's no reason why we can't do both, means it's easier for node operators.
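The resource argument here largely comes down to memory: a common rule of thumb for inference is parameter count times bytes per parameter. A back-of-the-envelope sketch (the figures ignore activation and KV-cache overhead, so treat them as approximations):

```python
# Back-of-the-envelope memory estimate for LLM inference:
# roughly parameter_count * bytes_per_parameter, ignoring activation
# and KV-cache overhead. Figures are approximations, not guarantees.

def model_memory_gb(params_billions: float, bits_per_param: int) -> float:
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# A 7B-parameter model:
fp16 = model_memory_gb(7, 16)   # 14.0 GB: needs a serious GPU
int4 = model_memory_gb(7, 4)    # 3.5 GB: feasible on a home GPU or CPU
```

This is why quantized text models are practical for home node operators while large diffusion workloads generally are not.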

Testing New AI Services

Now, again, there's nothing stopping you from deploying a diffusion model on the network when we open it up. Thanks for this. Another one, you've also touched on this a little bit, but our community is interested in when users will be able to use or test this new AI service. You can try it out now: if you go to our website, subquery.network, at the top there's a section for AI, and there's a link on that page to a demo. You can go try it out right now and talk to the model. That model is running on node operators in our testnet; we're testing it right now internally.

Future Development Plans

For the next phase, we'll be updating the node operator services so that they can start running models on their own machines, and then we will release it to mainnet. For the moment, you'll be able to go and test it out. Excellent. Do we plan to combine AI services with indexing? Yes, absolutely. Long term, there are a couple of ways. Firstly, we know that developers want help when they're indexing. How cool would it be to say: hey, I just want to index every transaction related to this smart contract, and I want it in this format for my application, which is an NFT marketplace.

AI-Assisted Indexing

And have the AI go out and generate an indexer for you. That would be pretty cool, and it's also very achievable. Another thing we're going to be doing is potentially allowing the AI to help you query a dataset. So rather than having to write GraphQL, you might be able to ask an AI that has the context of the data you have, and it can answer your questions in plain text. So there's a bunch of things like that we'll be working on integrating with the indexer. Yeah, I think we're all bullish on this new change and this product.
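The natural-language querying idea described here boils down to giving the model the dataset's schema as context and asking it to translate the user's question. A hypothetical sketch of such a prompt builder (the schema and function below are illustrative, not SubQuery's actual tooling):

```python
# Hypothetical sketch of AI-assisted querying: give the model a
# GraphQL schema as context so it can translate a plain-text question
# into a query. The schema here is illustrative, not SubQuery tooling.

SCHEMA = """
type Transfer @entity {
  id: ID!
  from: String!
  to: String!
  amount: BigInt!
}
"""

def build_query_prompt(question: str) -> str:
    return (
        "You are a GraphQL assistant. Given this schema:\n"
        f"{SCHEMA}\n"
        "Write a GraphQL query answering the user's question, "
        "or answer in plain text if a query is not needed.\n"
        f"Question: {question}"
    )

prompt = build_query_prompt("Show the 5 largest transfers")
```

The resulting prompt would then be sent to whatever model the developer has deployed on the network.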

Community Excitement and Rebranding

I think it will bring a lot of value to builders and lower the barrier to entry as well. That's exciting. We're all using AI in different ways to optimize what we're doing and be more efficient, so I think it's just the next step. Yeah, that's exciting. And now there are quite a few questions regarding our rebranding.

New Branding and Narratives

The main one was: can you give us a little more of the reasoning behind the new branding and narrative? Yeah. It's always good to have a new brand, and we know that our developers, our team, and our community love dark mode, so we really wanted to dive into that. We thought the previous brand didn't quite relate; it was a bit too abstract. So we asked: how can we visualize what we do for the web3 ecosystem? And we really liked this idea of the ecosystem that we help support, and there's no better way to visualize what supports an ecosystem than flowers, trees, and roots. That analogy really stuck with us. So that's the thinking behind the brand: we are the roots that support the web3 ecosystem. Infrastructure is used everywhere and anywhere, and it helps power, provide nutrition to, and support all these beautiful, diverse applications out there.

User Contributions to the Project

Absolutely. I also like the new branding, and I think it really aligns with our mission and the innovation we are bringing to the space. We also have one more question regarding the brand change, and it goes like this: when changing brand recognition, how will the project develop to give previous users the best conditions to contribute to it? I'm sorry, what do you mean by that question? So the question was how current users and followers, that's just how I understand it, can contribute to the project in the context of this new brand or new product launch. Nothing changes with the previous products. We're still supporting and working on the indexer, and we're still supporting and working on the network. We've just expanded our vision; there's no new vision, we're simply adding a new product to our existing suite. Nothing changes with how people can contribute or build, nothing's changing with the indexer, and nothing's changing with how the community is organized.

Node Operators and Rewards

Yeah, absolutely. I think we've only expanded it: we give more opportunities and more context, but nothing really changed in the ways you can contribute and collaborate with us. Another question is related to node operators, so changing areas a little bit here. The question goes like this: for node operators, running RPCs can cost thousands of dollars per month, so how can we optimize the rewarding process so it brings more incentive to node operators? Do we have any plans to do that? Yeah, we're laser-focused right now on two things. One of them is that we've just updated the pricing model, which means we're going to be distributing more requests based on the competitive price that you set and the performance you deliver. The aim is that we send a lot more requests, and therefore a lot more tokens, to node operators that are better, faster, or more performant. I put an article on the forum about that last week; I suggest you update your pricing. Every three days we'll be assessing and resetting who we're sending requests to.
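The reallocation James describes, routing more requests to cheaper and more performant node operators, can be sketched as a simple weighting. The scoring formula below is an illustrative assumption, not the network's actual algorithm:

```python
# Illustrative sketch of performance/price-weighted request routing.
# The scoring formula is an assumption, not SubQuery's actual algorithm.

def weights(operators: dict[str, tuple[float, float]]) -> dict[str, float]:
    """operators maps name -> (price_per_request, avg_latency_ms).
    Cheaper and faster operators get a larger share of requests."""
    scores = {name: 1.0 / (price * latency)
              for name, (price, latency) in operators.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

share = weights({
    "fast-and-cheap": (0.001, 50),
    "slow-and-pricey": (0.002, 200),
})
# fast-and-cheap ends up with the larger share of routed requests
```

Under any scheme of this shape, lowering your price or improving latency directly increases the fraction of requests (and hence rewards) routed to you.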

Future Customer Expansion

The second thing is we're laser-focused right now on adding more customers, and you can see we're starting to add a bunch more customers to the network. We added some projects, or Noodle added some themselves yesterday, and by adding more developers and more demand to the system, we're planning to get a lot more traffic through it, and that will increase your rewards. We're well aware that the token price going down means that ultimate rewards come down, and it's something we don't like either. Every week we meet and discuss what the expected rewards are, and we're working on that. Thanks, James. A question from a completely different area: how was the Web3 Summit? The Web3 Summit was awesome. It's really cool in Berlin. It was a lovely venue on the outskirts of town, and there are a lot of people in Berlin who are really into web3 and crypto.

Thoughts on the Web Three Summit

A lot of very smart people base themselves here, and it was great to be here and talk to all these amazing gigabrains about these ideas, topics, and conversations. It was very fun, so we're very happy with it. We enjoyed our time, it was good to present and announce this live on stage, and we'll probably be back again. Yeah, absolutely, I agree with everything you said. What was also very unique about this event is that it had a bit of a philosophical tinge to it. There were a lot of discussions that went beyond web3 and addressed issues from the societal to the philosophical to the technological. So it was a very interesting space to be in and share thoughts with leaders, and also a great opportunity for us to join the stage and announce our new product.

Exchange Listings and Future Plans

And the last question is about any plans for listing on new exchanges, including big centralized exchanges like Binance or Coinbase. Is there any plan to do that? Yeah, we're constantly talking to new exchanges, and we're trying to make sure that any exchange we add makes sense. I know it's been a while since we added a new exchange, so stay tuned. Of course, as I say all the time, we can't ever say anything about our plans for exchanges, because they are super careful about any announcements beforehand. So you'll hear it when it's time, and hopefully we'll be sharing more about this soon. Thank you, James. Those were all the questions we got, so we've addressed all of them. Thank you so much for joining and sharing a little bit more about this new development for SubQuery and what's next. Thank you very much for having me on this call.
