Space Summary

The Twitter Space Primary Markets Endgame, hosted by Cointelegraph, delved into the pivotal role of primary markets, blockchain technology, and digital assets in reshaping the financial industry. Experts discussed the transformative impact of blockchain on traditional financial systems, strategies for successful ICOs, and the regulatory landscape influencing blockchain initiatives. The conversation highlighted the significance of DeFi in revolutionizing primary markets, as well as the challenges and opportunities in this dynamic ecosystem.


Questions

Q: Why are primary markets essential for ICOs and fundraising?
A: Primary markets provide a platform for companies to raise capital through ICOs, facilitating growth and innovation.

Q: How does blockchain disrupt traditional financial systems?
A: Blockchain introduces transparency, security, and efficiency into financial transactions, transforming traditional processes.

Q: What gives Bitcoin and Ethereum their market dominance?
A: Bitcoin and Ethereum's longevity, network effects, and technological advancements contribute to their market leadership.

Q: What role does DeFi play in reshaping primary markets?
A: DeFi enables decentralized financial services, enhancing accessibility and efficiency in primary market transactions.

Q: Why is regulatory compliance crucial for blockchain initiatives?
A: Regulatory compliance ensures consumer protection, market stability, and legitimacy in blockchain projects.

Highlights

Time: 09:15:20
ICO Strategies and Primary Market Dynamics: insights on effective ICO strategies and the dynamics of primary market investments.

Time: 10:25:47
Blockchain Innovation and Financial Transformation: exploring the innovative applications of blockchain in transforming financial systems.

Time: 11:35:12
DeFi Impact on Traditional Markets: analyzing how DeFi influences traditional market structures and investment practices.

Key Takeaways

  • Primary markets are crucial for initial coin offerings (ICOs) and fundraising.
  • Blockchain technology revolutionizes traditional financial systems.
  • Bitcoin and Ethereum remain dominant in the cryptocurrency market.
  • In-depth analysis of emerging trends in blockchain and digital assets.
  • Insightful discussions on the impact of decentralized finance (DeFi) on primary markets.
  • The importance of regulatory compliance and market stability in blockchain initiatives.
  • Industry experts share valuable perspectives on the future of primary market investments.
  • Strategies for navigating the evolving landscape of blockchain investments.
  • Exploration of the intersection between blockchain technology and traditional financial sectors.
  • Insights on key challenges and opportunities in the primary market ecosystem.

Behind the Mic

Introduction to the Discovery Episode

Hello. Hello, everyone. Thank you so much for tuning in for another discovery episode. This time we are with Iagon, a project on Cardano. We haven't had a lot of projects on Cardano lately. We had Nouvella the other day, but I think this is the second project we are covering on Cardano, which is very interesting, very cool, actually, to see, because, to be fair, Cardano was a bit of a blind spot for us for a really long time. Not necessarily because it's bad or anything like that. It's just that there's so much you can cover; there's so much going on in crypto. And unfortunately, another blind spot for us, which a lot of people call us crazy for, is Solana. It's not that we are not looking into it; it's just not something we are actively promoting or actively looking into, not necessarily for any reason.

Introduction of Speakers

Today, we are actually with two speakers, which is very cool. Initially, there was only one speaker, but I see there are two now, which is lovely. The more the better, actually. So let's start with the introduction of both of you, and from there we can go into more about the product you are creating, and we can go into depth on that. I might butcher your name, but I'm going to do my best today. We have Nav; I think the pronunciation is somewhat correct. I don't know, but that's pretty good. Okay. He's the CEO of Iagon, which is, I think, the most important thing to know. If you could give us a bit of an introduction about yourself: how did you get into Web3, and more importantly, why did you start Iagon?

Speaker Introduction and Background

Yeah, sure. So I'm the founder of Iagon and actually got into Web3 around 2013, when I read the Bitcoin paper, you know, more so about financial freedom and things like that. So I really got into crypto a little bit after that, around 2016. That was the first research, really getting into Bitcoin and how blockchain could be applied in various ways. And 2016 and '17 is when we first approached the idea of Iagon. Me being from a healthcare background, in terms of the healthcare industry, I really wanted to protect patient files, patient records, and patient data, and privacy and compliance were important for me. So I wanted to apply blockchain technology there, use it as kind of a privacy, security, and access layer for medical data.

The Evolution of Iagon

And that's when the first idea came about. First it was called Juno, and then it evolved into Iagon as we redid the modeling and rethought the whole process with the co-founders. We have four founders, and two of them are in the US, Doctor Rohit Gupta being one of them. He wrote his thesis back in 2005 on decentralized compute, and it related a lot to what we were trying to accomplish. So after that we transformed into a generic decentralized cloud service. We're trying to basically build an Airbnb for storage and compute, and that's what we're working on now. Awesome.

Interesting Background and Perspective

I think that's actually a really interesting introduction, specifically how you wanted to solve a real problem. Because what I've heard a lot of times is that these patient files, specifically of celebrities, right, they get seen quite a bit. I'm from the Netherlands, by the way, and we have definitely had a few cases whereby celebrities were in a hospital and all of a sudden a portion of their patient file was online, which is obviously not something you really want. It's kind of wild, to be fair. Obviously the people who do it get punished and maybe fired, but then the damage is already done, right? The patient file is online, or a part of it is online.

Introduction of the Second Speaker

The other speaker we have today is Brock. I don't know your full name, unfortunately, so I will call you Brock, but I'm more than happy to call you by your name; unfortunately, I don't know it yet. That being said, please give us an introduction about yourself. How did you get into Web3? And more importantly, what did you bring to Iagon? Hey, you might want to sit down for this, but my name is just Brock. I'm the Brock of Cardano, so it's just easy to go by Brock. Yeah, I'm the product manager at Iagon, so I've been working with the company since the earlier part of 2024, expanding out the storage infrastructure, kickstarting the compute infrastructure, and most recently, of course, our hardware venture into releasing Cyclone, which is our purpose-built hardware node.

Brock's Background

That's kind of an entry point for people interested in operating a node, but who don't already have their own, you know, server equipment or storage equipment, anything like that. But my background is in TV and film. I work in Toronto's advertising post-production industry full time, and recently ventured into running my own operation instead of working for a studio. So with that time opened up, I was interested in getting onto the Iagon team and focusing on cloud infrastructure, because cloud storage is a very useful and important part of my industry, and simultaneously operating large storage systems is part of it too, because the raw files from cinema cameras, REDs, ARRIs, are massive.

Experience in Data Management

So I'm used to dealing with large amounts of data and having to store it both locally and in the cloud, file transfer solutions, all that stuff that Iagon seeks to decentralize, increasing security and improving user experience overall. That's where my expertise is. So yeah, awesome. Happy to be here and chat about it. That is awesome. It's always interesting to hear where people actually came from before they got into Web3. You are the first one who comes out of the television realm. Of all the 52 spaces, or a bit more, I would say 60, I've never had someone out of the, let's say, entertainment and television world.

Discussion of Data Privacy

Most people obviously come from finance or healthcare. Most of the time, when people come out of the healthcare industry, it's indeed mostly because they have seen how much data is being handled, and most of them are relatively negative about it, considering it's very privacy sensitive. But that's a very cool story, very cool background. Just for the audience: sometimes I will ask questions which they might already have given me an answer on. The reason why I do this is because obviously not everyone is able to listen to these spaces. So we will create audio bits later on, specific bits for the Web3 community to listen to, very small, like two or three minutes, basically to get the word out about Iagon in a really easy-to-understand way, so that we can provide audio content next to our static content, which is already out there.

Overview of Iagon’s Solutions

So this question has already been partly answered, but we would love to have an audio clip on it. Could you give me a brief overview of what sort of problem Iagon is trying to solve? Because you were talking about one part, which is the storage component, but you guys are also heading into the compute industry, which is obviously very hot right now. Specifically, DePIN projects in general are very popular right now. So if you could give us a bit of an idea there, that would be awesome, also for the audience to get some more context about you guys. Yeah, sure. I mean, there are a few problems in the centralized world in terms of storage and compute.

Challenges with Centralized Solutions

We talked about security and privacy. Another one is compliance, and I would say cost effectiveness is another one. And these are the kinds of problems that we're looking to solve with Iagon. And I think that the way the architecture works allows us to solve these problems. So first I want to talk about compliance, which is not as well talked about within the space, especially when it relates to data and even compute. So one of the requirements of certain regulations, including GDPR and HIPAA and various other compliance regimes around the world, is that data or tasks need to be handled in a specific region at a specific time. So you can't be storing EU data, for example, outside of Europe. That's one of the premises of GDPR.

Compliance and Data Control

Obviously, GDPR is much more complicated than that, but the premise is that we're structured in a way to solve a lot of these problems, including privacy and security, at the same time, because we give the control back to the user in terms of data storage. So people who are uploading their file have full control of their file. There's no middleman that holds the file; you have full access to it. So when you're actually distributing or storing the file on the Iagon network, we're sharding the file, you're doing user-side encryption, and you're spreading it across the nodes that you select. How you select that subset of nodes is based on different variables, like performance, availability, location, different variables that are important to the end user.

Geolocation and User Control

And you can geolocate the data in a very simple and easy way. The same thing goes for compute; you can do it in a similar manner. And then the third and last one I was talking about was cost effectiveness. With this model, as with all DePIN projects, we're able to solve the problem of cost effectiveness, especially the high cost on the compute side, because we have a reward mechanism in terms of the token, which is given to the people that are providing their idle capacities, like storage and compute, to the network, thereby reducing the cost by up to 80%, which is a big deal not just for regular consumers, but also for enterprises around the world.
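As a rough sketch of the flow just described (not Iagon's actual SDK; the class, function names, and thresholds here are illustrative assumptions), user-side encryption, sharding, and region-pinned node selection could look like this in Python, using the third-party cryptography package:

from dataclasses import dataclass
from cryptography.fernet import Fernet  # pip install cryptography

@dataclass
class Node:
    node_id: str
    region: str          # lets the user pin data, e.g. keep EU data in "EU"
    availability: float  # fraction of recent heartbeats answered, 0..1
    read_write_mbps: float

def encrypt_and_shard(data: bytes, key: bytes, n_shards: int) -> list:
    # encrypt on the user side first, then split the ciphertext into shards
    ciphertext = Fernet(key).encrypt(data)
    size = -(-len(ciphertext) // n_shards)  # ceiling division
    return [ciphertext[i:i + size] for i in range(0, len(ciphertext), size)]

def select_nodes(nodes, n, region=None, min_availability=0.99):
    # filter by the variables mentioned above, then prefer the fastest nodes
    pool = [x for x in nodes
            if x.availability >= min_availability
            and (region is None or x.region == region)]
    pool.sort(key=lambda x: x.read_write_mbps, reverse=True)
    return pool[:n]

all_nodes = [Node("n1", "EU", 0.999, 120.0), Node("n2", "EU", 0.995, 95.0),
             Node("n3", "US", 0.999, 200.0), Node("n4", "EU", 0.970, 300.0)]
key = Fernet.generate_key()  # only the uploader ever holds this key
shards = encrypt_and_shard(b"medical-record-bytes", key, n_shards=2)
targets = select_nodes(all_nodes, n=len(shards), region="EU")
placement = list(zip(shards, targets))  # one shard per selected node

Because the key never leaves the uploader and each node only ever sees one encrypted shard, an individual node, or an attacker who compromises it, learns nothing useful, which is the secure-lake point made later in the conversation.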

Cost Effectiveness and Compliance

And if we're able to provide that with the one-click compliance that we are aiming for, we're talking about large savings and large problems being solved, especially on the legal side as well. We will definitely come back to the compliance side of things. I think specifically in computing there are definitely a few remarks there which I would love to know more about. But before we go to compute, let's try to structure the Twitter Space a bit, whereby I would first talk about the storage part, and then we slowly but surely go to the computing part, because these are the two main products.

Traditional vs Decentralized Storage Solutions

Let's go from one to the other. One thing, I think, is a question most likely on the minds of many people out there as well. Many people here, whether they know it or not, use some traditional cloud storage. Most of them do. Specifically, if you are on Android and you back up your photos, they're going to be on your Google Drive, and Google is obviously a traditional storage provider. The question, I think, for many people, me included, is: all right, why would I go for a decentralized storage provider like you instead of going to the centralized ones?

Centralized Storage Providers Comparison

And of the centralized ones, there are three big ones: Azure, obviously, Google Cloud, and Amazon. So why would someone go to you guys instead of going to the centralized ones? Yeah. So there's an analogy that I like using a lot in terms of why you'd use distributed storage like ours, or decentralized storage, compared to centralized solutions. One of the biggest problems with centralized solutions is data leaks. This happens time and time again, every couple of weeks, and we've seen so many headline stories about it, most recently the Social Security hack in the US, which leaked a lot of documents.

Data Security Issues

So there's a problem here. And the analogy I like using is a data lake versus a secure lake, the secure lake being Iagon. For the data lake, let's imagine a lake with a fence around it, the lake representing the server and the fence representing the encryption. I'm putting this in a simple way; obviously it's much more complicated than that. But say you have the lake, and there's a hacker that wants to get into the server, which is the lake. They want to break the encryption, or break the fence, and jump into the lake. And this happens a lot.

Innovative Security Measures

And when they jump in the lake, they can swim in the lake and collect as many buckets of water as they want. The water represents the data, and they can deal with that data as they please. Right. What Iagon is doing with its patented technology is something called the secure lake. So imagine a different concept, but a similar analogy: multiple mini lakes that are completely frozen and also fenced around. The frozen part represents the blockchain access layer, and the fence again represents the encryption layer. Right.

Hacker Prevention Techniques

And the mini lakes represent the distributed shards across the network. So if a hacker comes and approaches one of the mini lakes, they can jump over the fence and try to get access to the data. And even if they do, there's no meaning behind that data; there's nothing there to collect, even if they collect it. And the only person that can, let's say, liquefy those mini lakes and make the data accessible is the person that uploaded the file, with the private key. And that's why I think you would want to use a distributed solution like ours: it makes it much more difficult for people to access records and makes your data even more secure than it needs to be.

Control and Security in User Data Management

So that's the first point, in terms of security: it makes it much more difficult, and only the person that has the access key will be able to access their data. The other thing is that you fully control your files. The way Iagon is built, we put the onus, or the control of the data, on the user's side. Usually what happens in centralized solutions is you have a middleman, which is AWS or Google, controlling your files.

Access to Data

Even if they say they don't have access to it, they have access to your data; they can read your data and actually learn from your data. And then they have a lot of machine learning and, you know, these kinds of things, and they can target you with ads and whatnot. So this happens even if you don't know about it.

Data Control and Encryption

Here, you are the data controller, meaning there's no middleman that holds your data, because when you're uploading, it's sharded and encrypted, and there's no way to access it. Even the person that has a shard, or the node that has a shard, will not be able to see what is inside the shard. They would have to have access to all the shards to combine the file and learn what's in it, and the only person that can do that is you. So those are the two major reasons, with cost effectiveness being the other one, why you would store on a distributed storage platform like Iagon.

Sharding and Scalability

So it's a bit like sharding, right, how we would see sharding in blockchain scalability; it's similar in a sense. And I think the analogy was really good, by the way, whereby you say, well, we have this data chunked up into different parts in different lakes, whereby if you got access to one lake, you'd get a really small portion of the data, which you cannot do a lot with, because you need everything to actually make something of it. That makes sense. So basically that means the data is being, you would say, distributed; really, that's, I think, the perfect word here.

Performance and Trade-offs

What comes to my mind is that if it's distributed, generally, whenever you distribute things or make things more decentralized, things tend to go slower. Most of the time you have to make some trade-offs for scalability; we all know about the trilemma and all that, even though it's not necessarily very applicable here. But in my head, if I take data and distribute it over, let's say, 100 lakes instead of one lake, it would most likely cost me a bit more time to actually get my data. For a retail person, let's say I have photos, I don't really care. I mean, if I have to wait 5 seconds to see a photo, or maybe 1 or 2 seconds more than I'm used to with a centralized provider, do I really care? Probably not. But for a company which needs data very quickly, because they're running, I don't know, a time-sensitive application, that might actually matter a bit more.

Ensuring Performance

So how do you ensure that? Obviously you make it secure, you make it distributed, but how do you make sure the performance is equal, or close to equal, to the centralized storage providers? Yeah, I mean, that's a great question. I think Brock will get into it a little bit more, but just some initial comments. One of the variables in our patent, which is basically called an intelligent decentralized autonomous marketplace for storage and compute: we're learning the behavior of node operators across different variables, and performance is one of them, right? So we learn the performance in terms of their read/write bandwidth and the different variables that are important to the end user.

Nodes and Performance Metrics

So when you're allocating a specific shard of the data, you're actually selecting different kinds of packages or subscriptions that you've signed up for, and depending on the package, let's say if it's an enterprise package, you only get the highest-performing nodes. So it's not necessarily true that if it's distributed, it's going to be slower. Obviously, at the beginning phase, yes, I would agree that in the starting phases it definitely will be slower than centralized solutions. But you have to remember that there are far fewer centralized data centers than there are nodes, right? So if you're accessing data from a centralized solution, you're not accessing it locally; you're maybe accessing it from Ireland or somewhere in Europe.

Latency and Access Points

Even with Twitter, for example, you're not accessing data from somewhere nearby. What I mean to say is that latency could actually be reduced once a product like ours gets to a certain stage, because you're accessing infrastructure that's even closer to home. And maybe Brock can add to that. Yeah, exactly. And you've got to remember that when you talk about centralized data centers, you're talking about people who store the entire file, right? So if your file is 10 GB, well, you have to download that 10-gigabyte file all from one place with one network connection, like one Internet connection through the ISP.

Performance in Centralized vs Distributed Models

And obviously they work with different ISPs for rolling over if one goes down or whatever. But you're trying to download a ten-gigabyte entire file, which, yes, gets chunked, but all those chunks are coming from the same place, and you're sharing that connection with potentially millions of people who are also trying to download and upload to the same data center. But with distributed storage, you're only trying to download small shards from many different nodes. You get a faster response time from those individual nodes for each of those few-megabyte shards, and then they're rebuilt into your file. So there's actually potential for significantly less latency in terms of download and upload speed versus a central data center, where they're dealing with entire files stored on, sometimes, even just one single drive.
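As a loose illustration of Brock's point (fetch_shard is a hypothetical stand-in for a real network call, not Iagon's client), parallel shard retrieval and in-order reassembly can be sketched like this:

from concurrent.futures import ThreadPoolExecutor

def fetch_shard(node_url: str, shard_id: int) -> bytes:
    # placeholder: a real client would make an HTTP/P2P request to the node
    return f"<shard {shard_id} from {node_url}>".encode()

def download_file(shard_map: dict) -> bytes:
    # shard_map maps shard index -> node holding that shard; every node
    # serves only a few-megabyte piece, and all pieces arrive in parallel
    # instead of through one shared connection to a single data center
    with ThreadPoolExecutor(max_workers=len(shard_map)) as pool:
        futures = {i: pool.submit(fetch_shard, url, i)
                   for i, url in shard_map.items()}
    # reassemble in shard order, regardless of arrival order
    return b"".join(futures[i].result() for i in sorted(futures))

print(download_file({0: "node-a.example", 1: "node-b.example", 2: "node-c.example"}))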

Decentralization and Reliability

Yes, I think that makes sense. Obviously, whenever you decentralize nodes, and let's assume, a bit of a perfect-world example, that the nodes are perfectly distributed around the world, that would mean that for the most part nodes will be a bit closer than usual. This wouldn't be the case for everyone, because maybe in your country there's a big data center, but for most people it would be. However, now we get to the question: it really depends on how performant these nodes are. I'm very much wondering about this, considering it has been a relatively significant issue for computing, and for cloud storage it's most likely no different: reliability is really important, and if the product is not as reliable as its centralized counterpart, that's a problem.

Ensuring Node Reliability

Let's put it that way: with computing it might be a bit bigger a problem than with cloud storage. But whenever a user wants to get their file and a node is not responding, then obviously you could go to the next node, or whatever sort of fallback mechanism you guys are using, but it takes time. So the question, to make it a bit more concrete, is: how do you ensure it's reliable? For example, if there is an underperforming node, how do you ensure that it is accounted for in the ecosystem, and that the underperforming node won't be used again until it is performant?

Consensus Models and Performance Metrics

Yeah, I'll go a bit into the consensus model at Iagon. Our consensus model is called proof of utilitarian work. It's a combination of proof of stake and proof of work. Basically, as I mentioned before, we're learning the behavior of resource providers in terms of their idle capacities. So we're learning their performance, their availability, their trustability, their location, the different metrics that we are assessing continuously. That gives them a kind of reputation score, and that reputation score is basically matched against what the users are asking for.

User Needs and Performance Matching

Obviously, not every task requires the same kind of work. For example, for a simple game, you might not need a high-end GPU chip, right? But for a complex game, you're definitely going to need that. Same thing here: different tasks need different things, and as a user you can select what you specifically need. For enterprise users, we will match you up according to what your needs are. Those are the variables that we're matching against, and we make sure that the reputation score is updated throughout. Maybe, Brock, you want to add anything there?

Minimum Requirements and Reputation Systems

Yeah, that touches on the basics of it, really, the overall explanation. We have a minimum requirement in terms of performance metrics, everything from read/write speeds to Internet upload and download speeds to chip performance itself. So there is a minimum that you need to qualify for to be able to launch a node on our network. And then we also have a reputation system in place that prioritizes tasks towards higher-performing nodes. And like Nav said, as an end user you can specify: okay, I need only the best GPU chips for this, or the best-performing CPU cores for this task; or, on the storage end, I need at least this amount of upload and download speed, and then we can make sure to match them up with that.
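Purely as an illustration (the metrics, weights, and thresholds below are invented, not the actual proof-of-utilitarian-work model), a reputation score combining the tracked variables, with matching gated on a task's minimum requirements, could look like this:

from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    performance: float   # normalized 0..1: read/write speed, chip benchmark
    availability: float  # 0..1: uptime across heartbeats
    trustability: float  # 0..1: history of correctly completed tasks

WEIGHTS = {"performance": 0.40, "availability": 0.35, "trustability": 0.25}

def reputation(p: Provider) -> float:
    # weighted blend of the continuously assessed metrics
    return (WEIGHTS["performance"] * p.performance
            + WEIGHTS["availability"] * p.availability
            + WEIGHTS["trustability"] * p.trustability)

def match(providers, min_performance=0.0, top_n=3):
    # enforce the task's minimum requirements, then prefer high reputation
    eligible = [p for p in providers if p.performance >= min_performance]
    return sorted(eligible, key=reputation, reverse=True)[:top_n]

fleet = [Provider("a", 0.95, 0.999, 0.90), Provider("b", 0.60, 0.980, 0.95)]
# an enterprise-tier task that asks only for the highest-performing nodes
print([p.name for p in match(fleet, min_performance=0.90)])  # ['a']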

Centralized Queries Versus Guarantees

Whereas, you know, on a centralized platform, you may not get those guarantees. And even if you do, we see it with ISPs all the time, who guarantee a specific speed but don't actually deliver it. But with our system, it's kind of a guarantee. It's not just an account manager directing you towards whatever service looks best to them or whatever; it's actually all a trained model that correctly matches those things up.

Authenticity Checks for Node Performance

One thing which I've seen with other projects, and I think io.net was a really good example of this, is that, without going too much into the specifics of how they did it, basically a lot of people provided GPUs to the network, and some people kind of forged them, in the sense that they said, well, this is a 4090, but in reality it's like a 1060 or something ridiculous like that. They actually had to act at a certain point: all right, we have to integrate some proof-of-work metrics.

Ensuring Compliance and Authenticity

And it went as far as them actually implementing Auth0 to check: is this the real GPU which you are providing? Is this the real deal? Well, this is not a perfect solution, because if you put an authentication layer on top of it which is a centralized authentication layer, well, it's not decentralized anymore; you have a real single point of failure. So how do you guys ensure that specifically? I think this is more of a computing question, but maybe it's also applicable to cloud storage.

Ensuring Compliance in Decentralized Systems

How do you ensure that if people provide something, be it a chip or storage, it actually is what they say they provide? Because obviously this would only come out whenever that specific node is actually being used. With io.net, there are a lot of GPUs which are just idle, so until they are used, no one knows. Well, this is not the case anymore, but before, it definitely was, whereby no one really knew whether the 4090 was an actual 4090 or not, because it was not being checked.

Node Benchmarking and Consistency

When you first set up your node, there is a benchmark test that runs and actually tests the real performance score of the system. And then, for the sake of consistency, there are periodic heartbeat tests to make sure that the hardware hasn't changed at all. Even if we run a 100% benchmark test, like a stress test, on it when it's first signed up, we can also derive the 50% or the 25% scores.

Monitoring Hardware Changes in Performance

Right? And then if it later runs a periodic 25% stress test and the score doesn't align anymore, the network can tell that maybe something has changed in that hardware, whether it be the chip dying or the hardware actually being swapped out physically, and it knows to flag an issue there. Then we can let the user know to start looking into their hardware and make sure that nothing is wrong with it. And if they sort out the issue, they can continue getting tasks.
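A small sketch of that consistency check, under the simplifying assumption (mine, for illustration) that a partial stress test should score roughly the corresponding fraction of the initial full benchmark; run_stress_test here just simulates the hypothetical periodic test:

import random

def run_stress_test(fraction: float) -> float:
    # stand-in for a real periodic stress test; returns a performance score
    return 10_000.0 * fraction * random.uniform(0.92, 1.08)

def heartbeat_ok(baseline: float, fraction: float, measured: float,
                 tolerance: float = 0.15) -> bool:
    # flag the node if a partial test drifts too far from the expected
    # fraction of the initial 100% benchmark score
    expected = baseline * fraction
    return abs(measured - expected) <= tolerance * expected

baseline = 10_000.0               # score from the sign-up benchmark
measured = run_stress_test(0.25)  # periodic 25% heartbeat test
if not heartbeat_ok(baseline, 0.25, measured):
    print("flag: hardware may have changed; pause task routing")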

Scheduled Tests and Performance Control

All right, that makes sense. The initial benchmark, sure, I think that's very standard, but I think it doesn't help a lot on its own, right? Like you say, you could just swap out the card. Obviously it's not that easy, but with enough technical knowledge, you could make it seem like a 4090 even though you swapped it out for a 1070 or 1060, some degraded card. So these interval tests, the tests which come after the initial test, are they just at random?

Testing Frequency and Integrity

They're regularly scheduled. We have the same system with the storage infrastructure as well, where every so often a test is run. It keeps an eye on the performance scores, so the web dashboard can also plot and chart their score history, their performance history, and make sure that everything is running correctly. So yeah, they're regularly scheduled, so it would be very difficult to cheat.

Continuous Monitoring and Performance Stability

And they're at small intervals as well, just to note that. All right, but if I know they are regularly scheduled, and let's say I'm a malicious actor, wouldn't it be possible to say: all right, every Tuesday at 1:00 a.m. there is a test, so I will just swap the card back in on Monday, and no one will bat an eye, because, well, it seems like it's a 4090.

Addressing Malicious Intent

Well, you're going to be constantly swapping every couple of minutes then; that would be kind of physically impossible to actually keep up with in a malicious manner. They're a lot more frequent than just a once-a-week sort of thing. Our dashboard measures your performance history on a much tighter schedule than that.

Compliance in Decentralized Systems

Okay, that makes a lot of sense. So, right, it's not going to be on a specific day, and the intervals are so small that, well, it would be possible, but it's probably not worthwhile to do. I think that's the point. Understood. And talking about the computing part, actually, I think you made a really good point about GDPR and all the compliance requirements which are there.

Compliance Challenges with High-End GPUs

I'm very much wondering how you guys will handle the compliance part on the GPU side, and let me give some context for what I'm actually going to ask. One of the biggest issues right now is that the high-end GPUs, and we are talking about the data center GPUs, the H100s, H200s, are embargoed for the most part. What does that mean? That means that Nvidia, for example, is not allowed to sell them to certain countries, including China, Iran, and a few others.

Geopolitical Considerations in Technology

And the reason why, I think everyone can fill this one in, but it's just to ensure that China is not evolving further in AI and all that sort of stuff. But if you make it decentralized, it's going to be quite hard to actually make sure you're compliant with that embargo. And, yeah, well, I've heard people say, well, we can just check geographically whether they are operating out of China.

Ensuring Compliance in Decentralization

Well, they could just run a VPN, and then you don't know. So I'm very much wondering how you guys are ensuring that your GPUs, for example, won't be used in countries where apparently it's not allowed.

The Role of Iagon in Decentralized Compliance

I'm not saying it's a good thing, but that's the status quo: it's not allowed there. Yeah. So we want to make sure that we relay the message clearly in terms of what Iagon's role is here. Iagon is kind of like a middleman, kind of like Airbnb, if you will. We're just matching the end users' needs with the idle capacities that are available. So we're not directly involved with the compliance; what we do is put the onus on the consumers to comply with the regulations. Right. And we also give the same options, for example, to the node operators. Node operators that are going to be operating in different jurisdictions will have the option to not serve certain countries or locations.

Node Operators and VPN Solutions

VPNs are something anyone can use. People can use high-tech VPNs and bypass these kinds of rules and restrictions, but they can do that with centralized solutions as well. So we have to build these tools to give options to both sides, consumers and node operators. We don't want to be treading down the path of controlling how it's used. We are basically providing the tools and the means to do as the user or consumer wishes, and it is up to them to comply with the regulations. But they have the power to, let's say, not store files from consumers coming from a certain place, or serve consumers using data from there; or they don't want to be hosting websites, or they don't want to be doing certain things. So we'll make those tools available for node operators.

Regulations and Decentralized Networks

I don't know, Brock, if you want to add anything else there. No, I think that's exactly it. When you're talking about a decentralized network, that's just its nature: it is decentralized. So it's not our job to apply privacy blockers or regional blockers or anything like that; that's just not the nature of a decentralized network. All right. I tend to be a bit in the middle here. I can definitely understand your argument, and I would agree to a certain extent, whereby I also can see the fact that you guys are facilitating it.

Facilitating Resource Sharing

You are facilitating one party to another party; you are making it possible. Well, I wouldn't say without you guys it's not possible, because you could make the argument there are other platforms, but without these platforms, as in the marketplaces, it would be near impossible to provide your underutilized resources in terms of GPUs to another party. It's not necessarily worth really having that discussion here, because I think it's a bit of a legal question as well. I was just wondering what your view is on it.

Navigating Legal Complications

I wouldn't say it's bad for you at all; I get this answer for the most part when I ask DePIN projects how they handle this. And honestly, there's not much to discuss in terms of legal, because it's also going to be a bit on the speculative side. Why? Because most of these platforms, like yours, are relatively new. There are even more questions. For example, another question which I ask most DePIN projects, and they usually don't have a good answer on this either, is the following.

GPU Card Regulations

One thing data centers are not allowed to do right now is run 4090s, 4080s, 4070s. Why? Because Nvidia's terms of service don't allow these cards to run in data centers. They are specifically made for consumers; they are not data center cards. So they're not allowed to actually be in these data centers. And the biggest issue here is that Nvidia doesn't give a data center definition. So what is a data center? Is it someone running ten GPUs on a DePIN project, with ten consumer cards,

Uncertainty Around Data Center Definition

the 4090s, 4070s, and so forth; is that considered a data center? We don't know. So these sorts of questions are very interesting, to see how Nvidia will actually deal with the technology and whether they will allow it or not. Because if they won't, that's a big issue, because then you are not allowed to use the drivers and so forth. It wouldn't be a pleasant experience. Before I go to the next question, would you guys like to add something to that, or would you like to move on?

Challenges in Compliance Discussion

Yeah, I mean, I can add a little bit to it. Compliance is definitely a difficult topic to discuss, and that goes for centralized solutions as well; we're not just talking about DePIN projects, we're talking in general. You provide the best possible means to comply, and we're actually trying to make it very easy and simple. But these kinds of issues do arise for centralized and decentralized solutions alike. So it's something where you do the best you can.

Political Influence on Technology

So you're saying that governments at the end of the day do have some power, and I agree with you to some degree, but we can take it to an even more minute, smaller level. When you're thinking about Nvidia banning certain chips, or an embargo being put on them, not allowing China to have them, we also have to understand that most of these chips are made from minerals that come from China.

Supply Chain Challenges

So if eventually China does want to put a block on it, they can actually put a block on all of these chips, because without the minerals, there are no chips to be made. This is at a more political level, but if we're veering into these small details, I think you have to take everything into account. That's a fair point. And I think we actually have seen it to a certain extent, whereby Covid-19 has shown us what a supply chain squeeze can do, specifically in the chip market, whereby we are still suffering to a certain extent from the Covid-19 outbreak.

Impact of Covid-19 on Chip Supply

Supply chains are back on track, but we have seen, based on multiple reports, that the demand for chips is way bigger than the actual supply, and that is partly due to the supply constraints which were there for two years. So I completely agree with you on that fact; even though it's a bit more political, I would say we have seen it already happening in real life. We are now slowly but surely going to the next section, but I have one more question, and that's a bit more on the blockchain part.

Choosing Cardano for Blockchain Development

Obviously, like I said earlier in the space, Cardano has been a bit of a blind spot for me, not necessarily because I don't like Cardano or think it's bad, not at all. It's just that there are so many layer ones you can look into. So I'm very much wondering: what made you decide to build on Cardano initially instead of, I don't know, Ethereum, Avalanche, whatever? Yeah, actually, we started our MVP on Ethereum.

MVP Development and Transition

We're a project from 2017, and our first MVP was released on Ethereum around 2018 to 2019. We realized quickly, well, relatively quickly in startup terms, that from a business aspect, Ethereum wasn't scalable enough for us to reach the kind of business model that we wanted to reach. So: scalability, security, and cost. There are a few things that we think Cardano really thrives in.

Advantages of Cardano Over Ethereum

First, for example, on Ethereum at that time, transaction costs were very high, and we're currently able to offer one month for $2, which is about $24 a year. If we had to pay transaction costs on Ethereum, that would roughly have doubled the cost right away at that time, and on Cardano it's much, much lower. There's also the scalability issue, where Ethereum is kind of building backwards, in the sense that they're trying to move to a proof-of-stake model, whereas Cardano was built slowly and methodically but has had the answers for how they're going to scale and what they're going to do to scale.

Decentralization and Security of Cardano

And that's the other thing: it's much more decentralized, in a sense. There are so many single node operators across Cardano, which makes it a much more distributed system, thereby also making it more secure. All right, that's understandable. I agree. I think specifically back then Ethereum was very expensive; nowadays it's a bit cheaper, but it's still most likely going to be more expensive than Cardano for the most part, and that's probably not going to change for a while. Sure, it could change over time, maybe.

Scalability of Ethereum and Future of Cardano

There's definitely a lot of work on the scalability of Ethereum. I completely agree, it's not very scalable; unfortunately, even now it's not entirely scalable. I mean, if we are talking about this moment, sure, it's relatively cheap. I think a normal transaction is probably $0.40-ish, which is outrageous if you compare it with web two transactions. Imagine paying $0.40 for every transaction you do in your bank account; you probably wouldn't be happy. Sure, it might change over time, but it will take a lot of time.

Operational Costs and Business Decisions

I mean, even going from proof of work to proof of stake, going to ERC-4337, it takes so much time, and that won't change quickly, unfortunately. So I completely understand your business perspective; it made a lot of sense that you went for a chain which is more scalable, allowing you to decrease operational costs, which is obviously one of the things you would like to decrease as a cash-flow-oriented business. Before we go to the next section, which will be a bit about the future and the challenges you guys have faced, I'm wondering: obviously you guys have a token.

Token Utility and Rewards

How does the token come into all of this? Yeah, so there are a few ways we use the token. First of all, it's rewarded for idle capacity: people that are providing their idle capacity in terms of storage and compute get rewarded in IAG tokens. Eventually it's going to be used as a payment method for the subscription as well. And the people that are providing their idle capacity have to stake a certain amount of IAG as kind of a trust bond between the network and themselves.

Future of Token and Subscription Fees

That IAG is staked, and they earn about 90% of the subscription fees that are coming into the network from storage and compute. The other 10% is going to be used for treasury development, and part of that is going to be used to buy back some of the tokens to continue rewarding the idle storage and compute. The runway is about ten to 15 years for storage and compute rewards, and hopefully by then the subscription fees coming in will be much more than the actual rewards.

Token Conversion and Additional Revenue

I don't know if I missed anything out, but Brock, did I miss anything? No, that's the base of it. There are two earnings tokens that we talk about, one being IAG, like was mentioned, and the other being ADA. These tokens in the backend remain as those, but we do plan to add some interoperability between tokens: if you want to earn and claim your earnings in USDT or USDC or whatever, we'll add all that in, and if you want to pay your subscription in Bitcoin or whatever else, that'll be added in too.

Advantages of ADA in the System

But in the end it all gets converted into these two tokens that the backend handles, for simplicity. Yeah, and converting to ADA, that reminds me of one thing when you brought that up. So the fees are converted to ADA, and then that ADA will be delegated to a Cardano node, which will earn an extra kind of revenue on top of that, and that will also be distributed to the node operators.

Earnings and Rewards for Node Operators

Yeah. So the utility of ADA alone is great in its simplicity and low transaction costs; there are a lot of benefits there. On the IAG that's staked to operate the node, you earn a rewards APY that is currently capped at 30%, so we're seeing nodes earning between 20 and 30% APY on their initial staked IAG. And then the ADA fees, which the end users use to pay for their subscriptions: 90% of that goes towards the node operators, and like Nav said, 10% is used for treasury development and also more token buybacks, which is great.
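Putting numbers on that split (the fee amounts below are invented; the 90/10 split and the 30% APY cap are as stated above):

def split_fees(monthly_fees_ada: float):
    # 90% of subscription fees to node operators,
    # 10% to the treasury (development plus token buybacks)
    return 0.90 * monthly_fees_ada, 0.10 * monthly_fees_ada

def staking_reward(staked_iag: float, apy: float, cap: float = 0.30) -> float:
    # yearly reward on staked IAG, with the APY capped at 30%
    return staked_iag * min(apy, cap)

operators, treasury = split_fees(1_000.0)  # say 1,000 ADA of fees in a month
print(operators, treasury)                 # 900.0 100.0
print(staking_reward(10_000, 0.42))        # cap applies: 3000.0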

Emissions and Staking

Yeah. And the emissions are going down as more and more IAG is staked, of course. Exactly. Yeah. All right. That makes sense.

DePIN Projects and Competition

I'm also wondering, because one thing which we see quite a bit with DePIN projects is that they are super competitive in general. DePIN is relatively competitive, and I think specifically the computing side of things is very competitive, whereby some of them even subsidize their hourly prices, for example, to really be competitive. Obviously that's not really sustainable. So I'm wondering, where does your current operational cash flow come from? Is it from the subscription model; do you just take a percentage of that?

Current Operations and Financial Stability

No, we have a development token allocation. So we have 80k USD worth of tokens every month for development costs, and that's the current runway. We also have support from grant programs like Innovation Norway and Project Catalyst through Cardano, which are also helping us produce the different products that we're looking at. Going back a little bit, I forgot to talk about single-asset staking, which a lot of projects like doing.

Single Asset Staking and Delegated Staking

We wanted to actually have purposeful single-asset staking, because we believe that just staking a token and earning yield is counterproductive. So what we're trying to do here with single-asset staking is delegated staking, something similar to what Cardano is doing. Basically, if, for example, you're a node operator and you want to expand your services, but you don't have the IAG tokens, or the capital, to do so, yet you have the devices, you can ask token holders to delegate IAG to your node and expand your services.
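A toy model of that delegated-staking idea, assuming (purely for illustration) that a node's serviceable capacity is gated by the total IAG staked to it:

IAG_PER_TB = 100.0  # hypothetical stake required per terabyte served

class NodeStake:
    # operator stake plus third-party delegations gate node capacity
    def __init__(self, operator_stake: float):
        self.operator_stake = operator_stake
        self.delegations = {}

    def delegate(self, holder: str, amount: float) -> None:
        self.delegations[holder] = self.delegations.get(holder, 0.0) + amount

    def capacity_tb(self) -> float:
        total = self.operator_stake + sum(self.delegations.values())
        return total / IAG_PER_TB

node = NodeStake(operator_stake=5_000)  # enough stake for 50 TB on its own
node.delegate("holder-1", 2_500)        # a token holder expands it to 75 TB
print(node.capacity_tb())               # 75.0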

Future Plans for Sustainability

So this is a very cool, let's say unique, tool within the whole space; I think we're one of the first to do that. And it'll also be liquid staking, meaning that when you are staking, you are able to sell your staked IAG on the open market at a discounted price if you don't want to wait until the unlock happens. All right. So you said you guys have a token allocation for development, but obviously that would run out at some point, because, well, you don't have infinite coins.

Achieving Profitability

So what is the future plan to make sure that operations will be sustainable, that you earn money through your operations and can continue? Yeah, that's what all projects are trying to achieve, right, to generate revenue and get enough funds in to be profitable at some stage. And this is the biggest question, or let's say problem, in cryptocurrency that is not answered so often, because usually the burn rate is much higher than the revenue.

Client Discussions and Revenue Generation

We have a few pilots with, let's say, web two enterprises that we are in discussion with, and it's not like the traditional kind of pilots that you will see across the cryptocurrency space. These are big clients, and we are obviously doing it a little bit differently than a lot of the other projects. So once these pilots go through and we are able to announce them publicly, people will understand why these can bring in enough revenue.

Referral Programs and Commission Models

And in general, we do have a referral program coming out, hopefully early next year, that incentivizes the community and people in different industries that have the know-how to spread the word about Iagon. It's going to be a referral program that incentivizes the community to go out and garner adoption. We also have a commission model, meaning that if you do sign, let's say, enterprises, you get a certain commission kickback for signing those deals. So we have a few plans, and we have a few years to get this adoption going, but I think we are on the right track with these pilots.

Continuous Growth and Sustainability

And eventually the referral program and adoption will likely come. Yeah. And even without partnerships and all that, with 10% of all incoming user fees going towards our treasury and development, as long as our network is earning as a whole, our development fund pool continues to grow. So 10% is a pretty good share when you start scaling up the operation.

Team Structure and Future Operations

Our development team is pretty lean; we're around 30 people in total operating the entire company right now. And that's in a period where we're building everything all at once. Eventually, all of the products will be out and operational, and then we'll just need a smaller team to maintain the backend, so the operational costs can go down while the earnings go up, and there will be a healthy self-funding situation going on there.

Cash Flow and Profitability Challenges

Okay, I completely agree. The funny thing is, whenever we are in these spaces and ask about cash flow, some projects come up with non-operational cash flows. We completely agree with you guys: it's very hard to be profitable in crypto. Infrastructure projects in general and DeFi projects seem to be the most profitable, whereas RWA, GameFi,

Future Operations and Revenue Plans

and DePIN, it's still very hard, because you're competing with web two, which is obviously harder than having an infrastructure project where you're not necessarily competing one-to-one with web two. And why is DeFi profitable? Just because it's much more scalable; it's much easier to scale, because you can just take a fee on every transaction, whereby it's a bit easier to be profitable. That being said, that's also why there are so many DeFi protocols, and I would say 95% of them are not profitable.

Onboarding Partnerships and Community Engagement

It's just the big ones which are profitable. One thing I'm wondering, and obviously you guys have talked a little bit about it: these are your plans to become profitable, to onboard more partners and leverage more revenue streams. Is there any other future development plan, or maybe a roadmap, for Iagon which the community should know about?

Ambitious Roadmap and Product Development

We have a lot going on. At Iagon, we have an ambitious roadmap, and we're trying to work on different products. So we're not just a DePIN project; I mean, we are, in the sense that our core goal and core mission is building decentralized, or distributed, storage and compute.

Innovative Use Cases

But we're also trying not to sit there idle and wait for adoption. We're trying to create use cases on top of this infrastructure, products that we believe will drive user adoption. So we have some apps coming out, for example something like a Dropbox, something like a WeTransfer.

Upcoming Applications and Devices

We also have a messaging app coming out at the end of the year. And we have, like we mentioned earlier in the call, Cyclone, which is available for preorder now: a plug-and-play device for people that aren't tech savvy, so they can plug in the device and run a storage node or a compute node at home or at their office.

Roadmap and Reputation Protocol

So that is available now, and pre-orders are open. We also have other things coming up in our roadmap, such as Stature, a reputation protocol, which we can go into in a whole other session. But it's basically defining quantitative and qualitative measurements of people's positive actions throughout the Web3 space, and quantifying them in a qualitative way in terms of receiving badges.

Quantifying Positive Actions

And when you receive badges, you earn points for on-chain activity, for example, that is positive in the network, and you earn a reputation at the end. And you can use this reputation across the Web3 space, meaning not just our protocol but different protocols, and build up your reputation and be kind of a viable asset to the Web3 space.
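In the spirit of what's described (the action names and point values below are placeholders, not the actual protocol), the badge-and-points idea reduces to mapping positive on-chain actions to points and summing them into a portable score:

BADGE_POINTS = {"completed_task": 5, "uptime_month": 10, "governance_vote": 2}

def reputation_from_actions(actions) -> int:
    # sum badge points earned from positive on-chain actions
    return sum(BADGE_POINTS.get(a, 0) for a in actions)

print(reputation_from_actions(["completed_task", "uptime_month"]))  # 15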

Development of Reputation Protocol

This helps us directly, because we need this kind of reputation protocol for our node operators, so it's specifically needed by us, but it can be expanded into the Web3 space. We also have a suite of apps coming, something along the lines of Google's suite, where they have Gmail and Docs.

Rolling Out Use Cases

So we have an ambitious roadmap coming up, and it's going to take some time, for sure, but we're rolling out these use cases one by one to drive adoption and grow our user base. Currently we have around 203 subscribers to our network, and we haven't really gone into marketing yet, so we expect that to rise much more later on.

Building on Infrastructure

But, yeah, a very ambitious roadmap. Yeah, exactly. We're very big on building on our own infrastructure, a lot of proofs of concept, as well as building products that we truly believe are useful to everyone and that also utilize the network.

Creating More Applications

Right. That drives more adoption and more usage of our own network. So we're not just building a network; we're also building a plethora of different applications and use cases on that infrastructure, on that network, to really prove how much you can do on it and how easy it is to develop with.

Community Engagement and Contribution

All right, that's awesome. I'm definitely going to look out for you guys. It seems like there is a lot coming, a lot of cool, a bit different things. I didn't really expect that, so I really love that. I will definitely keep an eye out.

Invitation to the Community

Obviously there are a bunch of people on this call, Twitter Space, whatever we want to call it, and I'm quite sure a really good portion of them are very excited about what they are hearing, and maybe they are wondering how they could contribute to Iagon. So where do they have to go if they want to be part of the community, or even

Joining the Community

want to actually provide their value to the ecosystem? Yeah, I mean, everyone can visit www.iagon.com, and if you scroll all the way down the page, you can see all of our socials as well. Discord, we're very active there; all of our node operators are there as well. We also have Telegram, but I recommend Discord and Twitter.

Transparency in Communication

We're very active, so if you follow us and also turn on notifications, we have updates all the time. Every week we have development updates, so we try to keep the community as updated as possible. We try to be as transparent as possible as well.

Building Community Trust

You know, that's one of the main things lacking in the Web3 community: trust. And I think that being transparent helps a lot, to communicate with the community and actually assess the feedback we're getting, positive or negative, and improve on it.

Community Engagement and Feedback

Right. So I also encourage you guys to do that. This was a great AMA with House of Chimera. Thanks, guys. I appreciate you guys reaching out to us and having this discussion with us.

Closing Remarks

Oh, yeah, absolutely. Sorry, one more thing I just wanted to slip in there: if you're also coming from a project side and you're interested in maybe utilizing our infrastructure and cutting down operational costs significantly, please do reach out to any of the team members and get in contact, or reach out to Nav directly.

Providing Solutions for Projects

We provide solutions that can really cut down on a project's storage costs or their front-end hosting costs. So we're looking forward to making more connections with projects and helping them get migrated over.

Final Thoughts

Yeah. Awesome. And before we go, I always give the guests the last note. So in your case, if there's anything which I might have missed, or something you would like to put emphasis on, well, this is your chance, because this stage is yours.

Concluding Message

At Iagon, it's your data, your control, your future. Well, it's short but very sweet; it's a strong message. I completely agree. We have been looking into you guys and we are very impressed. We'll definitely continue to make content and see where you guys are going and how you are evolving over time.

Thank You and Future Discussions

For now, I would like to say thank you for coming on. Let's definitely do another one a bit later, maybe in Q1. I think the messenger app you talked about is scheduled for early 2025, if I understood that correctly.

Future Collaboration

Yeah, at the end of this year. Correct. Awesome. Well, then maybe we can talk a bit more about that and how it is evolving, and we can obviously do a bit of a look back. I also would like to thank the audience for sticking with us for over an hour, listening to us, actually discovering what you guys are doing, and more.

A Commitment to Future Engagement

I'm happy to do another one later, either this year or next year. Yeah, thank you. I appreciate everyone listening in as well. Thank you for hosting us, Chimera, appreciate it. Take care.
