Space Summary
The DILL October AMA Twitter Space, hosted by dill_xyz_, delved into the world of scalable data availability networks and their compatibility with the Danksharding roadmap. Discussions covered community engagement, technological progress, and the challenges of building efficient data availability solutions. Key takeaways highlighted the importance of collaboration, transparency, and secure data handling mechanisms in the evolution of infrastructure networks. The event offered insights into future trends and the decentralized nature of data management, underlining the need to address these challenges for sustainable growth.
Questions
Q: How do data availability networks contribute to the Danksharding roadmap?
A: They provide the scalability and data handling capabilities required for Danksharding's implementation.
Q: Why is community engagement important for data availability networks?
A: Communities provide support, feedback, and adoption for these networks, driving growth and innovation.
Q: What challenges do developers face in creating scalable data solutions?
A: Interoperability, security, and efficiency are key challenges that developers encounter.
Q: What are the future prospects for data availability networks?
A: The future holds advancements in technology, increased adoption, and improved data handling mechanisms.
Q: How can collaboration enhance innovation in data network development?
A: By pooling expertise and resources, collaborations lead to more robust and efficient solutions.
Q: Why is transparency important in the development of data networks?
A: Transparency builds trust, fosters community participation, and ensures credibility in data network projects.
Q: What role does technology play in advancing data availability solutions?
A: Innovations like blockchain and AI drive the development of more efficient and secure data networks.
Q: How do scalable data networks contribute to the decentralization of data handling?
A: They enable distributed data storage and processing, reducing reliance on centralized systems.
Q: What impact can secure data handling mechanisms have on user trust?
A: Secure systems enhance user confidence, encourage adoption, and protect sensitive data.
Q: Why is it important to address challenges in building scalable data networks?
A: Solving challenges ensures the long-term viability and sustainability of data availability solutions.
Highlights
Time: 00:15:22
Importance of Scalable Data Networks: Discussing how scalability is a cornerstone for effective data availability solutions.
Time: 00:30:45
Community Collaboration for Innovation: Exploring how community feedback and collaboration drive innovation in data network development.
Time: 00:45:12
Transparency in Data Solutions: Highlighting the role of transparency in building trust and credibility in data handling mechanisms.
Time: 01:02:30
Future Trends in Data Availability: Exploring the emerging trends shaping the future of data availability networks.
Time: 01:20:15
Security and Efficiency in Data Handling: Addressing the importance of secure and efficient data management in network development.
Time: 01:35:50
Innovative Technologies Driving Data Networks: Discussing how technologies like blockchain and AI propel advancements in data availability solutions.
Time: 01:50:01
Decentralization Through Data Networks: Examining how scalable data networks contribute to decentralizing data handling processes.
Time: 02:10:48
Challenges and Opportunities in Data Solutions: Analyzing the current challenges and future opportunities in building scalable data networks.
Time: 02:25:19
Trust and Transparency in the Community: Emphasizing the significance of trust and transparency in fostering community engagement.
Time: 02:45:05
Collaborative Strategies for Network Development: Exploring the benefits of collaboration in creating robust and innovative data availability networks.
Key Takeaways
- Scalable data availability networks play a crucial role in the Danksharding roadmap.
- Community engagement is vital for the success and growth of data availability networks.
- Understanding the compatibility between different networks is essential for scalability.
- DILL October AMA sheds light on the importance of decentralized data solutions.
- Collaboration among communities and projects fosters innovation in data network development.
- The AMA emphasizes the need for secure and efficient data handling mechanisms.
- Exploring new technologies and strategies is key in advancing data availability solutions.
- Learning about the challenges and opportunities in building scalable data networks.
- Insights on the future trends and developments in the data availability space.
- The role of transparency and open communication in building trust within the community.
Behind the Mic
Introduction to the AMA Session
Good day everyone. Welcome to our October AMA session with Dill. It's me again, Martin, your host for today's exciting event. Dill, as we know, is the most scalable data availability network. Whether you're here for the first time or you're an old friend of Dill, we're thrilled to have you here today. There will be two prize draws during the October AMA, one for our Discord participants and one for those on Twitter, so make sure you stick around. The prize claim window is short: winners will need to DM Dill's official Twitter account or open a ticket on Discord within 20 minutes after the AMA ends to claim their prize. If you miss the deadline, you'll lose your chance, so again, please stick around. Today Ted, the co-founder of Dill, will be addressing the questions we have here for him from the community. We have received a lot of questions; the ones we don't have time for today we will cover next time. So get ready to dive deep into some fascinating insights. Now please join me in welcoming Ted.
Question on Metrics and Indicators
How are you doing today, Ted? Good, glad to hear that. Let's jump right in. The first question I have for you today is: what specific metrics or success indicators are you tracking during this testnet phase to ensure Dill is ready for mainnet, and how will you determine when the network is mature enough for a full-scale launch? Yeah, so we received this question a lot. It's a commonly asked question, and I think it's very important for us to address it and make sure the community understands the whole incentivized testnet process, what the prerequisites are, and how we are getting prepared for the full-scale mainnet launch. First of all, I would like to thank everyone who has been joining the incentivized testnet. Without your help, we wouldn't have been able to scale the network in this phase to a point where we're confident enough to launch the next phase. In general, there are a few metrics we're looking into to make sure the network is behaving as we expect and the performance is on par with our expectations.
Scalability and Performance Metrics
First, and most importantly, is scalability, since Dill is positioned as the most scalable data availability network. At the moment we're actually ten to 100x our competitors even in this phase alone, compared to other DA solutions, DA networks that are already on mainnet. The goal for this phase, for the incentivized testnet, is to scale up to 12,000, close to 13,000 nodes. If you look at some of the other DA networks, most of them are operating at the scale of hundreds of nodes, so this is essentially ten to 100x our competitors. One of the major technical pillars for this to succeed is sharding. We basically have two mechanisms in the sharding. One is data sharding: essentially, a piece of data being ingested into the network is handled as follows.
Data Sharding and Network Structure
First, the data is erasure-coded: if it comes in as an n-by-n data matrix, it will be encoded into a 2n-by-2n matrix, and then it is divided into shards. That's the data sharding part. The second part is network sharding, where each of the subnets has a subset of validators to verify that the data is correct and to propagate the data within the subnet. So scalability is the most important thing we're looking at, and sharding is the technology backing it. Essentially, there are a few things we're checking: whether the data is actually being encoded correctly, and how efficient that encoding is. That's the first part. The second part is that we're also performing stress tests to make sure our network stays stable even in very extreme scenarios.
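To make the encoding step concrete, here is a minimal Python sketch of a 2D erasure-coded extension: an n-by-n matrix of field elements is extended to 2n-by-2n by treating rows, then columns, as evaluations of low-degree polynomials. The prime field, the Lagrange-based extension, and the toy dimensions are illustrative assumptions for exposition; this is not Dill's actual encoding code or its real parameters.

```python
# Toy sketch of a 2D Reed-Solomon-style extension (illustrative only).
P = 2**31 - 1  # small prime field chosen for readability, not for production

def lagrange_eval(ys, x):
    """Evaluate, at point x, the unique degree < len(ys) polynomial through
    (0, ys[0]), (1, ys[1]), ..., with all arithmetic mod P."""
    n = len(ys)
    total = 0
    for i, yi in enumerate(ys):
        num, den = 1, 1
        for j in range(n):
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def extend(row):
    """Extend n samples to 2n samples along the same polynomial."""
    n = len(row)
    return list(row) + [lagrange_eval(row, x) for x in range(n, 2 * n)]

def extend_matrix(data):
    """Extend an n-by-n matrix to 2n-by-2n: rows first, then columns."""
    rows_ext = [extend(row) for row in data]                               # n x 2n
    cols_ext = [extend([r[c] for r in rows_ext]) for c in range(len(rows_ext[0]))]
    return [[cols_ext[c][r] for c in range(len(cols_ext))] for r in range(2 * len(data))]

if __name__ == "__main__":
    for row in extend_matrix([[1, 2], [3, 4]]):  # n = 2 toy example
        print(row)
```

The useful property this illustrates is that any half of the cells in a row or column is enough to reconstruct the rest, which is what makes the random sampling described later meaningful.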
Network Performance Metrics
We also check that the network jitter and latency stay manageable. The second important metric we're looking at is throughput: we target a throughput of ten megabytes per second. That's roughly ten times our competitors; even EigenDA, which is essentially a DAC (data availability committee) holding the data off-chain, performs at about this scale, and we are a layer-one network, so it's very impressive to have this performance level on throughput. The third part is network liveness and stability. This is why we run eight phases of the incentivized testnet, so the whole infrastructure gets battle-tested over a long period of time.
Ensuring Stability and Reliability
Just to make sure the network liveness as well as the stability have been tested over a longer time. So in general, those are the indicators we look at to verify the testnet is achieving what we want, and to gather enough metrics and evidence for us to feel confident launching mainnet, and for our partners to feel confident about the Dill network. As you may already know, we have multiple components in the network, plus some peripheral components. Setting the peripherals aside, within the network itself there are two roles: one is the full node and one is the light validation node. Both are part of the consensus network, the core Dill network.
Node Requirements and Performance
The full node requires much higher computation power and network bandwidth, because it has to synchronize and propagate the block data and produce the block for each slot. So one thing we look at is whether the full node can still produce the block, and synchronize it within the slot time inside its shard, even under very high-TPS conditions. That's specific to the full node. The light validation node performs DAS (data availability sampling); some of you are already running the light validation node in the current testnet. For it, there are a few things.
Validating and Monitoring Nodes
One is how it performs on the DAS side, verifying that the data is correct at the particular coordinates it is sampling. The second is the DHT network, an auxiliary network that helps the light validation node look up the sampled data quickly; it also keeps some redundant data in the network so that, in case something happens, there is still a way to recover the data. In general, these metrics are specific to the light validator node and the full node.
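For readers unfamiliar with DAS, here is a minimal sketch of the sampling idea the light validation node performs. The request_cell function is a hypothetical stand-in for the real DHT lookup, and the sample count is an illustrative assumption; this is not Dill's client code.

```python
# Illustrative data-availability-sampling check (not an actual Dill API).
import random

def sample_availability(request_cell, size, num_samples=30):
    """Return True if every randomly sampled cell of the extended matrix
    can be fetched from the network."""
    for _ in range(num_samples):
        row, col = random.randrange(size), random.randrange(size)
        if request_cell(row, col) is None:  # cell missing or unretrievable
            return False
    return True

if __name__ == "__main__":
    everything_available = lambda r, c: 1
    half_withheld = lambda r, c: 1 if (r + c) % 2 else None
    print(sample_availability(everything_available, size=64))  # True
    print(sample_availability(half_withheld, size=64))         # almost surely False

# If an adversary withholds enough cells to block reconstruction (half of the
# extended data), each sample catches it with probability ~1/2, so 30 samples
# miss the withholding with probability on the order of 2^-30.
```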
Improvements and Upgrades
On top of that we also have some peripheral components, for instance the blockchain explorer, where validators and stakers can look at the status of their validator and of the network. We'll be improving that too. We received quite a lot of feedback from the previous public testnet, where people were complaining that they couldn't tell how much reward they had earned since they started staking, so we've been making UI changes to the explorer so that the status of your node is clearer and more intuitive, and you can take action if something goes wrong.
Future Enhancements and Closure
The queries will also work better. And on the tooling side, if you have been running a testnet node in both the previous testnet and the current one, you've probably noticed that the script for spinning up a node has become much easier: by default you don't even have to provide any input, and everything should work out of the box. We've also been working on features that allow people to withdraw their tokens, and we'll have updates on that as well. These are the major areas we're looking at and the major metrics and indicators we're closely monitoring at this stage. A lot of this happens behind the scenes, where we monitor some of the logs ourselves as well.
Congratulatory Remarks and Community Acknowledgment
Congratulations on such a successful testnet; you've hit a lot of incredible milestones along the way. Thank you for sharing the results, and thank you to the Dill community for their enthusiasm and support, because these milestones wouldn't have been possible without them.
Integration and Migration Queries
The next question I have for you is: how does Dill integrate with existing Ethereum-based rollups, and what are the steps for developers looking to migrate or build on top of Dill's modular architecture? Yeah, this is a very good question, and whoever asked it clearly has some in-depth understanding of Ethereum and the modular architecture overall. Just to summarize: first of all, Dill is compatible with existing Ethereum rollups. That holds for any rollup already built on top of Ethereum, whether you are using calldata, which is the quote-unquote legacy place for L2 transaction data, or you're using blobs, the blob-carrying transactions introduced by the later upgrade.
Minimal Changes For Existing Rollups
Either way, the changes for existing rollups to switch to Dill once we go live are going to be very minimal. Let me underscore that: Dill is compatible with Ethereum, and the change required is very minimal. That's the first thing. The second thing: if you're familiar with this space, most rollups nowadays aren't written from scratch; people normally use a rollup framework or SDK to spin up their rollup.
Available Frameworks and Tools
On the market there are quite a few options to choose from, and there are considerations beyond the technical stack: the ecosystem, your partners, and the liquidity side as well. All combined, there are a few popular options for spinning up a rollup: OP Stack, Arbitrum Nitro, Polygon CDK, the various zk stacks, and so on. If you look at the distribution of the current popular rollups on L2Beat or any other dashboard, off the top of my head, the number might be slightly off, at least 80% to 90% of the rollups are built with OP Stack or Arbitrum Nitro.
Ensuring Compatibility and Standards
So it's very important for us to make sure Dill is compatible with those two rollup frameworks. The good news is that OP Stack has defined a unified interface for alternative DA, so as long as your DA follows that standard, or specification, any chain or developer using OP Stack can use your DA with some configuration and you're good to go. We're working on making sure the Dill code and interface follow that specification so it works out of the box.
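For illustration only, this is roughly the shape of a DA-server shim a rollup stack can be pointed at: a PUT stores a payload and returns a commitment, and a GET serves the payload back by commitment. The route names, the hash-based commitment, the port, and the in-memory store are simplifying assumptions; this does not reproduce the exact OP Stack Alt-DA specification or Dill's implementation.

```python
# Hypothetical DA-server shim sketch (illustrative, not a real spec).
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

STORE = {}  # commitment (hex) -> payload bytes; a real server would use the DA network

class DAServer(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path.rstrip("/") == "/put":
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            commitment = hashlib.sha256(body).hexdigest()  # toy commitment scheme
            STORE[commitment] = body
            self.send_response(200)
            self.end_headers()
            self.wfile.write(commitment.encode())
        else:
            self.send_response(404)
            self.end_headers()

    def do_GET(self):
        if self.path.startswith("/get/"):
            payload = STORE.get(self.path[len("/get/"):])
            if payload is not None:
                self.send_response(200)
                self.end_headers()
                self.wfile.write(payload)
                return
        self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8547), DAServer).serve_forever()  # arbitrary port
```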
Collaboration and Certification
We're also getting in touch with the OP ecosystem to make sure we get, I'm not sure of the exact word, certified or endorsed by their ecosystem. On the other hand, Arbitrum Nitro at the moment doesn't have an alternative DA standard for other DA projects to follow, so the current practice is that you keep a fork of their code base, make modifications on top of it, and ship your own version of Arbitrum Nitro that's compatible with Dill.
Maintaining Upstream Changes and Collaboration
The catch is that you have to keep track of upstream changes in their repository, so any change that happens over there has to be ported manually and brought into your fork. We're working on both, and by doing that we'll hopefully be able to capture 80% or 90% of the market. On the Polygon CDK side, we are already compatible, and there's actually one team already using Polygon CDK that is starting to integrate it with Dill.
Partnership and Ecosystem Integration
From that perspective we're also covered on the other aspect, the RaaS side, the rollup-as-a-service platforms such as Conduit, Caldera, AltLayer and a few others. We are working with them, and as you may have noticed, we have already announced the collaboration with Caldera, and Dill has already been listed as their ecosystem partner. Once we really go live, we'll be working with them to make sure Dill is also listed as a DA option in their portal.
Developer Tooling and Effortless Experience
That way, once a developer wants to launch a rollup, Dill will be one of the options in the DA configuration alongside other alternative DA solutions such as Celestia, Avail, EigenDA, et cetera. We'll be working very closely with both the rollup frameworks and the RaaS platforms to make sure the experience of using Dill as a rollup's DA is seamless and effortless; that's our goal, and we want the friction there to be as little as possible.
Community Engagement and Feedback
And if there are any suggestions or feedback from them, we'll make it our top priority to address them and make sure the experience is top tier. That's really cool and interesting. I love learning more about Dill every time we have these AMAs. There are great questions from the community. What security measures are in place to protect the Dill mainnet blockchain from vulnerabilities and to ensure its long-term stability?
Security Measures Overview
Yeah, I think this is a more general question. For every blockchain, and especially for DeFi protocols, it matters even more, but security is always the top priority for any crypto or blockchain project. There are multiple ways we can look at it, but there is already a well-known set of best practices for security, especially for a layer-one or blockchain infrastructure project.
Security Protocols and Open Source Commitment
Obviously we're going to open source the Dill code on GitHub once we're ready. We haven't done so yet because the code base is still changing very frequently, and we want to tidy it up before we launch. We'll also do an internal security review within our team: during development we already do code review, but we'll also have a dedicated security-focused review pass on some of the key components, such as staking withdrawal as well as consensus.
Continuous Efforts for Code Quality
And obviously the innovative sharding part, we'll make sure that's covered too. That's our first level of security review, a dedicated effort within our team. Then once the code is on GitHub, we'll invite all the developers in our community, as well as outside people, to look at it, and at some point we'll probably run a bug bounty program so that white hats will look into it and report any noticeable critical bugs to the project.
Launch Best Practices and Community Involvement
There's also going to be a continuous effort on the code base, because we'll keep updating the code even after the blockchain, sorry, the mainnet is launched. So this is not a one-time effort but a continuous one, to make sure that, first, the code is transparent and open sourced, so everyone is confident, knows what's going on, and can see we're not doing something behind the scenes. That's the first thing.
Collaborative Security Approach
The second thing is that the effort shouldn't come just from us; it will be a joint effort from us, the community, security researchers, white hats, and the list goes on, so the code gets scrutinized by everyone. That is the best practice, and we should be well covered from that perspective on security.
Security Prioritization and Assurance
But again, with security, nobody can guarantee that everything is fine, so we're trying to do our best there, and it is going to be our priority; there's no question around that. Yes, security is of the utmost importance, so it's good to hear there'll be continuous efforts to review the Dill network frequently and ensure it stays hardened.
Transition to Prize Draw
So next, after a round of Q&A, it's time for the first prize draw. We're starting with Discord participants who asked questions. First, the most valuable question award goes to Helsunsoli and apathea eight. My apologies if I don't say your usernames correctly.
Winners Announcement
The five lucky question winners are Hossein czar, Ngu Tang, KVC, Bas Nobiz, Washi 0605 and Shah Dab. Huge congrats to all seven of you.
Prize Announcement and Instructions
You've each won a light validator. To claim your prize, be sure to open a ticket on Discord within 20 minutes after the AMA ends; if you miss the deadline, you won't be able to claim it. So again, make sure to open a ticket on Discord within 20 minutes after the AMA ends. To those who didn't win, don't be disappointed or go anywhere just yet: we'll have one more prize draw at the end of the AMA, where we'll announce the winners from the activity on Twitter. Stay tuned and keep your fingers crossed. Let's jump back to the Q&A.
Sustaining the Project
The next question I have for you, Ted, is: beyond hype and speculation, how will Dill sustain itself? What's the long-term business model, and how will the project generate revenue while remaining decentralized? Yeah, this is also a good question; we have so many good questions in this AMA session. Especially under current market conditions there's a lot of hype and speculation, but the most important thing for us at this stage is to make sure that, one, we are building, and two, there's a sustainable business model as well as a positive feedback loop for Dill itself and for the network.
Revenue Sources and Future Outlook
Obviously, the most important source of income for any DA is going to be the DA fees generated by whoever uses Dill as their data availability network, mostly rollups or app chains. In our opinion, after some recent retrospection, we believe the future is going to be the app-chain ecosystem, where projects with high transaction volume and very strong user communities will likely launch their own app chains, so that they can capture the gas value, create their own ecosystems, and have autonomy over sequencing, MEV value, and so on. With that view, the potential for app chains is going to be huge.
Decentralization and Community Engagement
One of the critical components for this to happen is the DA layer, and with that, the opportunity for DA in the future is huge, and today we are still very early in this game. So beyond the DA fees, there are a few other things we can capture. One is interoperability. As you may already know, one of the big reasons people still hesitate between launching their own app chain and just using Ethereum in general is interoperability, plus easy access to liquidity and assets. If you can fix those two things easily, natively, and securely, you have a very good value proposition for rollup or app-chain owners to use your DA.
Network Effect and Security
That creates a network effect: by choosing a DA that's natively integrated with the rest of the app-chain ecosystem, you get automatic access to that interop and liquidity, which is very attractive. With that, you get more app chains coming in, and it becomes a positive feedback loop for your network and ecosystem in general. This in turn accrues more value to the token, which raises the security level of the network, because security is basically economic value plus decentralization. With higher token value, the security of the whole DA network becomes higher, and that becomes a plus for anyone considering using it.
Community Growth and Token Value
And again, it will attract more app-chain owners and projects to use your DA. Once the token value goes up, we automatically attract more token holders as well as solo stakers. One unique thing about Dill is that we have this very innovative, literally first-of-its-kind incentivized DAS light validation node where, as a lot of you have already experienced, you can spin up a light validator node that performs DAS and also participates in the consensus voting in as little as a few minutes. With very little technical experience you can spin up a node quickly and very cheaply; it costs only $5 or $6 per month on a moderate machine from a VPS provider and you are good to go.
Technical Advancements and Block Size
With more solo stakers coming to the network, decentralization also increases, which in turn grows the DAS community. Technically speaking, that allows us to increase the block size. What does that mean for the app chains? As the block size increases, throughput goes up and cost goes down, which is a huge benefit for app-chain owners using Dill as their DA. With everything combined, we have this ecosystem-wide positive feedback loop on interoperability as well as liquidity access.
Sustainable Ecosystem Development
That's one thing. The second thing is, again, DAS. On the DAS side, we're looking at whether there's a way to let people run it right in the browser or on other commodity machines, even your phone, somewhere that doesn't require you to buy a VPS and run a node to participate, so it will be much easier for those people to contribute to the network. We're also working with some potential adopters of the Dill network; those projects are retail-facing, games, social, or other activities, so they have a lot of actual use cases.
Collaboration with Validators
We're trying to see if there's a way to engage both the Dill network community and those projects' communities together, contributing not just to Dill but to both projects combined. There are a lot of things we're exploring.
Acknowledging Community Feedback
Let me put it this way: we are aware of this problem, we have heard a lot of feedback from all of you in the community, and we are taking action to lower the barrier for non-technical users and allow them to participate and contribute to the Dill network.
Scalability Claims and Measures
Very interesting. Given Dill's claims of up to ten to 100 times scalability improvement, how does Dill empirically measure and verify these performance improvements? What benchmarking methods are used, and how does Dill plan to maintain that scalability over the long term?
Performance Metrics Discussion
Yeah, this question overlaps somewhat with the first one, so some of it I've already covered, but there's more, and this answer is probably going to be a little more technical, so just bear with me. On the performance improvements, there are a few things we're looking at.
Benchmarking and Throughput
One is benchmarking the number of blobs. You can view a blob as the vehicle, the data structure, that holds the data being ingested into the network. We benchmark the number of blobs that can be included in one single block: the higher the number of blobs, the higher the throughput, that is, the total size of data that can be included in a single block. So that's a very important metric for us.
Throughput and Cost Relation
This directly determines both the throughput and your DA cost, so it's the number-one thing we look at, since it is the deciding factor for those two other metrics. The second part of the question is how we maintain this scalability, but first, coming back to the empirical measurement.
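As a rough illustration of that relation, the arithmetic below estimates how many blobs per block would be needed to hit the 10 MB/s target, assuming EIP-4844-sized blobs and an Ethereum-style 12-second slot; both figures are assumptions for illustration, not Dill's published parameters.

```python
# Back-of-the-envelope blobs-per-block estimate (illustrative assumptions).
BLOB_SIZE_BYTES = 128 * 1024   # assume ~128 KiB per blob, as in EIP-4844
SLOT_SECONDS = 12              # assumed slot time
TARGET_MBPS = 10               # the ~10 MB/s target mentioned above

blobs_needed = TARGET_MBPS * 1_000_000 * SLOT_SECONDS / BLOB_SIZE_BYTES
print(f"~{blobs_needed:.0f} blobs per block to sustain {TARGET_MBPS} MB/s")
# prints roughly 900+ blobs per block under these assumptions
```

Different blob sizes or slot times change the count proportionally, but the shape of the relation, throughput scaling linearly with blobs per block, is the point.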
Stress Testing and Monitoring
Our development team conducts performance testing, stress tests, behind the scenes, pumping data into the network. While we pump data in, we monitor some of the metrics I mentioned earlier: latency and jitter, how the nodes perform on bandwidth, and the time to synchronize and propagate that data across the network.
Key Metrics for Performance
There are a lot of things we measure, but the most important are the latency, how quickly we can run the encoding, and how quickly the block is produced by the selected validator within the slot. We're also looking at the sharding, because there are cross-shard rotations from time to time.
Ensuring Robustness Through Sharding
Those rotations keep the subnets robust, and they're also a means to prevent collusion within a shard, which brings extra resiliency and security to the overall network (a small sketch of this rotation idea appears at the end of this answer). And theoretically, because we're using a sharding architecture, as more data comes into the network, the load on any single shard doesn't change that much.
Advantages of Sharding Architecture
That's the beauty of using sharding as the technology to power this: to some degree, the network performance doesn't degrade as the number of blobs increases. And because we also have DAS, with more decentralization we gain more confidence.
High Performance with Increased Block Size
Let me put it this way: as the number of DAS nodes increases, we have the confidence to increase the block size, so scalability remains high without sacrificing the liveness or the decentralization of the network. I know there's a lot of technical stuff there, but I've tried to explain it so it's clear to whoever asked this question.
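For readers curious what the cross-shard rotation mentioned above could look like in code, here is a hypothetical sketch that deterministically reshuffles validators into subnets each epoch from a shared seed, so no fixed group stays together long enough to collude on one shard. The seeding scheme, shard count, and function names are illustrative assumptions, not Dill's actual protocol.

```python
# Hypothetical epoch-based validator-to-subnet rotation (illustrative only).
import hashlib
import random

def assign_subnets(validators, num_shards, epoch, seed="shared-randomness"):
    """Return {shard_id: [validators]} for the given epoch, deterministically
    derived from the shared seed so every node computes the same assignment."""
    rng = random.Random(hashlib.sha256(f"{seed}:{epoch}".encode()).digest())
    shuffled = list(validators)
    rng.shuffle(shuffled)
    return {s: shuffled[s::num_shards] for s in range(num_shards)}

if __name__ == "__main__":
    vals = [f"val{i}" for i in range(12)]
    print(assign_subnets(vals, num_shards=4, epoch=1))
    print(assign_subnets(vals, num_shards=4, epoch=2))  # different grouping next epoch
```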
Acknowledging Efforts in Scalability
Yeah, no, your answers are always very in depth and provide a lot of detail. And in this case, you know, it's amazing how much you've improved scalability. So hats off to your team for building this. What's the next step?
Addressing Network Bottlenecks
Yes, just one more thing to add there, so this isn't just a cliché. To give you some idea: during the stress tests we observed a bottleneck on the network bandwidth. What does that mean? There are two options.
Options to Address Bandwidth Issues
One is to raise the recommended requirements for the full node, in other words increase the bandwidth requirement so it can keep up with the throughput, so that the data can be propagated throughout the network and synchronized among all the nodes within the slot without missing the deadline.
Ideal Solutions for Node Requirements
That's one option, but obviously it's not ideal, because it means transferring the cost onto the validator operators or solo stakers. It also raises the bar, the technical bar at least, and obviously the capital bar, for anyone to run a Dill node.
Engineering Optimization Requirements
So over the last couple of weeks, and we're still at it, we've been doing engineering optimization and fairly big changes to make sure the data can be synchronized within the propagation deadline without having to raise the bandwidth requirement for the nodes.
Focus on Full Node Performance
We've been working very hard to make sure the full node performs well and that we're not transferring the cost to the community and our partners.
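To put rough numbers on the bandwidth bottleneck just described, here is a back-of-the-envelope calculation with assumed, illustrative figures (block data volume, propagation deadline, gossip fan-out); none of these are official Dill parameters.

```python
# Naive upstream-bandwidth estimate for relaying a block's blob data (illustrative).
BLOCK_DATA_MB = 120      # assumed blob data per block (10 MB/s target * 12 s slot)
DEADLINE_SECONDS = 4     # assumed propagation deadline within the slot
PEERS_TO_RELAY = 8       # assumed gossip fan-out

upstream = BLOCK_DATA_MB * PEERS_TO_RELAY / DEADLINE_SECONDS  # megabytes per second
print(f"~{upstream:.0f} MB/s (~{upstream * 8:.0f} Mbit/s) of upstream for naive full-block relay")
# Sharding and erasure coding cut this down substantially, since each node only
# relays the rows/columns of its own subnet rather than the whole block, which is
# why the optimization work described above matters.
```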
Next Steps for Testnet
Perfect. I just have one last question for you, Ted. What's the next step after the full light validator testnet is over? Will there be more tasks to do, or anything else users should be waiting for to contribute to Dill, whether on testnet or mainnet?
Current Focus on Incentivized Testnet
Yeah, for now our focus is obviously to make sure the incentivized testnet is a success and that we achieve the benchmarks as well as the network scale we designed for this phase.
Future Plans and Roadmap
At the moment, the only thing we know for sure is that in the next couple of weeks, or rather over the remaining five or six phases, we'll be very focused on Dill, on the incentivized testnet in particular, but we're also crafting our roadmap for 2025 and beyond, and the go-to-market plan for Dill together with our partners.
Community Engagement and Updates
I'm sure there will be more to come in terms of tasks, not just on the technical side; there will also be more engagement with the community, so please stay tuned for that.
Prize Draw Announcement
Amazing, thanks for that detail. Well, all of your answers were detailed, so thank you, Ted, for answering all those questions. Now we're heading into the most exciting part of the AMA, the second prize draw.
Twitter Event Winners
This one's for our Twitter event, and the winners are, again, my apologies if I pronounce your usernames incorrectly: Luffy, mume, pussinboots, fuel Zero X, Oracle Fuel, and Mister person.
Prize Claim Instructions
All of you are the winners from Twitter, congratulations. Each of you will receive a light validator. Remember, to claim your prize, send a DM to the official Dill Network account on Twitter within 20 minutes after the AMA ends.
Final Reminders
Don't miss out: if you don't claim it in time, you won't be able to get your prize. So again, if you won the Twitter prize and your name was called, make sure to DM the official Dill Network Twitter account within 20 minutes after the AMA ends.
Thanks and Farewell
Thank you everyone that attended the show today. Thank you Ted and Dill network for your time. As always, I learned a lot, I had fun, and I hope everyone else did as well. So until next time, I hope everybody has an amazing day and take care and talk soon.
Conclusion of the AMA
Ted? Yeah, thanks, Martin. Thanks everyone for tuning in. This was really a good opportunity for me and the whole Dill team to answer all the questions you have and make sure we stay close with you.