Space Summary
The Twitter Space Linera x Argument: Real-time, verifiable computing, hosted by linera_io, covered Linera's work on real-time, verifiable computing infrastructure, including its focus on ultra-low latency and the security gains it achieves through microchains. Founded by @ma2bd, Linera tailors blockchain technology for instant data processing and responsiveness, aiming to set new standards for real-time applications in the sector.
Questions
Q: What is Linera's primary focus in computing?
A: Linera specializes in real-time, verifiable computing with a focus on ultra-low latency.
Q: How does Linera enhance security in blockchain technology?
A: By using microchains, Linera improves security and reduces latency in blockchain operations.
Q: Who founded Linera and what is their vision?
A: @ma2bd is the founder of Linera, aiming to optimize blockchain for real-time applications.
Q: Why is reducing latency crucial in real-time computing?
A: Low latency is vital in real-time computing for immediate data processing and responsiveness.
Q: How does Linera stand out in the blockchain sector?
A: Linera distinguishes itself by focusing on real-time infrastructure and security in blockchain technology.
Highlights
Time: 12:45:17
Linera's Innovative Infrastructure: Exploring how Linera's infrastructure is optimized for real-time applications.
Time: 13:15:29
Microchains for Enhanced Security: Understanding how microchains contribute to improved security in blockchain technology.
Time: 13:50:45
Founder's Vision for Real-Time Computing: Delving into @ma2bd's vision for optimizing blockchain for real-time computing.
Key Takeaways
- Linera focuses on real-time, verifiable computing with a specialized infrastructure for real-time applications.
- The use of microchains by Linera significantly reduces latency and enhances security in blockchain technology.
- Providing ultra-low latency is a key priority for Linera's infrastructure.
- Linera's innovative approach aims to revolutionize real-time applications and blockchain technology.
- Founded by @ma2bd, Linera is at the forefront of optimizing blockchain for real-time use cases.
- Blockchain technology can be tailored and optimized for specific real-time computing needs.
Behind the Mic
Introduction and Welcoming Remarks
Hey, everyone. Welcome. Just waiting for co-hosts and speakers to accept. Stay tuned. GMicrochains. GMicrochains, everybody, and welcome to the X Spaces to announce the partnership between Linera and Argument. My name is Danny Green, I go by DJ on X, and I'm your host for today. We just got off our community hour on Discord. We had about five. It was awesome. And hopefully a lot of them are making their way over from the Discord to the Spaces to hear a little bit more about this partnership that we announced a few days ago and just talked about on the community hour. I want to welcome our guest speakers for the day. Matthew, how are you feeling? How are you doing? Hey, hey, everybody. Yeah, great to be here. So excited about this announcement and to be able to tell you more about it pretty soon. Wonderful.
Guest Speaker Introductions
And John, how are you feeling today? Doing great. Doing great. Yeah. Really excited about this partnership and excited to be able to chat with Matthew about it here on X. Awesome. Thank you guys so much for being here, and for everybody else in the audience, thank you for being here. Hopefully we'll get all your questions answered and talk a little bit more about this, but just some housekeeping before we get into the details. In a bit, we are going to be giving away a secret password to get a POAP for attending this Space. But to be able to claim the POAP, you're going to need the POAP mobile app. So if you have not done so already, go download the POAP app from your phone's app store and enter the wallet address that you use to claim POAPs.
Important Details Regarding the POAP
Then there's going to be a button on the bottom right side that says, you know, get POAP, and we're going to give you a secret password. You'll be able to enter that secret password at the right time, and that will allow you to get your POAP. It is a limited-edition POAP just for this call. There are only 200 available, so it is going to be first come, first served for the first 200 people who are able to claim it with the secret password. I'm not going to tell you the secret password yet; I just want to make sure that everybody knows how to do it and that you have this advance warning right now. We'll come back to that a little bit later in the call.
Speaker Backgrounds
Let me introduce Matthew and John, whom you just heard say hello, just so you have a little bit of context, since I know they're not going to talk about themselves so much. Matthew Baudet is the founder and CEO of Linera, and during his nine-year tenure at Meta, Matthew was instrumental in the development of the Libra/Novi project, the web3 project over at Meta. He holds a PhD in computer science and specializes in BFT consensus protocols, cryptographic protocols, and formal verification. Matthew, did I get your bio more or less correct there? Yeah, yeah, that will work. That will work. Now let's hear about John. John is the CEO of Argument and has worked at the intersection of functional programming, cryptographic protocols, and startups for over a decade.
Discussion on Zero Knowledge Proofs
He is the author of the Yatima compiler, which allows formal proofs of arbitrary size to be compressed into a succinct zero knowledge proof. John, I'm sorry, I didn't understand all of what I just said, but maybe you'll be able to explain it today on the call. Yeah, absolutely. So I've been working mostly in the functional programming space for a while now, and have done a lot of projects in the Ethereum and Tezos ecosystems and in the Protocol Labs and Filecoin ecosystems. This is kind of what brought me into zero knowledge proofs, which is what we're mainly focused on at Argument now. I think Argument is unique as a company because we have not only this really strong cryptographic background on the team, but also this programming languages background. I'm not a cryptographer by training, I know a little bit now, hopefully, but my background really is in the design and implementation of functional languages.
Technical Insights and Future Projects
And so the Yatima compiler, I think, is a great example of this. We figured out a way to turn expressions in the Lean theorem prover into Lurk programs and to type check those programs in such a way that you can take a formal proof of some mathematical statement, like Fermat's Last Theorem, for example, famously, and then compress that into a succinct zero knowledge proof. So the idea there is that Fermat, in the margins of this textbook, wrote a theorem about the properties of natural numbers that puzzled mathematicians for centuries. And he said, I have a really marvelous proof of this theorem, but this margin is too narrow to contain it. And a couple of years ago we were thinking about similar things in the context of verifying properties of smart contracts on chain, where you have a really restricted ability to do computation, but you want to know that your smart contract has certain type properties or mathematical properties, or that it is a valid ERC-20, and so on.
Zero Knowledge Proofs for Smart Contracts
And what zero knowledge proofs allow is for you to take those properties, whether it's a mathematical theorem like the Fermat statement or a proof that this smart contract is a valid ERC-20, and create a really small, lightweight certificate, maybe a couple of kilobytes in size, that can be checked efficiently on chain. And so there's a project right now that is formalizing Fermat's Last Theorem in Lean; that's by Kevin Buzzard at Imperial College London. When that's completed, and when we have our next version of the Yatima compiler built on top of the new stuff that we're doing with Lurk, we're going to be able to take that theorem, create a Lurk proof out of it, and put that Lurk proof on Ethereum. And the proof is small enough that you can actually create a QR code, and it actually does fit in the margin of that textbook.
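For readers unfamiliar with Lean, here is a rough, illustrative sketch of what "stating the theorem formally" means. The statement below is a plain Lean 4 approximation written for this summary, not the Mathlib formalization from Kevin Buzzard's project; the proof itself is the part that remains to be filled in.

```lean
-- Illustrative only: an approximate Lean 4 statement of Fermat's Last Theorem.
-- This is not the ongoing Mathlib formalization; the hard part is replacing
-- `sorry` with an actual proof term, which a compiler like Yatima could then
-- compress into a succinct Lurk proof.
theorem fermat_last_theorem_statement :
    ∀ (x y z n : Nat), 0 < x → 0 < y → 0 < z → 2 < n →
      x ^ n + y ^ n ≠ z ^ n := by
  sorry
```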
Fermat's Last Theorem Validation
So not only is Fermat's Last Theorem correct, but his corollary, that the proof doesn't fit in the margin, is wrong. And we hope to have a constructive example of that within the next year or two. That's awesome. I love it. And that's what got you into zero knowledge proofs, not this particular theorem, but the idea of making mathematical proofs shorter on chain. Yeah, exactly. So when I was working on Tezos, the Tezos community was very interested in formal verification, because, as I'm sure many people on this call have noticed over the years, blockchain applications have a tendency to have security failures. They blow up and lose a lot of money for people.
Blockchain Security Challenges
I think we're at $3 or $4 billion over the past five years in terms of smart contract failures, especially bridges; bridges are really notorious for this. And so in Tezos, five or six years ago, we knew that this was a pattern and were trying to think about how to address it. And formal verification is this way of modeling software as if it's a mathematical object, which it is, and proving that for all possible inputs to this program, certain properties are obeyed. So it's kind of the highest standard of assurance that you can get about a computer program, better than testing, better than fuzzing. But on the other hand, it's very expensive and laborious to create formal proofs.
Gaps in Formal Verification
And consequently, it's really hard to make sure that those proofs are actually tied to the on-chain deployed application, if you're doing this for a smart contract. So we did tons of work formally verifying things; you can go look it up for Tezos, it's a whole area of really exciting work. But what I noticed was that we'd be doing all this cool stuff and it would be living in a GitHub repo somewhere, and then the on-chain implementation would diverge, we wouldn't update the GitHub repo, and the repo would bit-rot. It ended up feeling more like a research use case than something that was really providing security to the end user. And all of this was completely opaque.
Push for On-Chain Security Verification
How would you check the formal proof? You would have to go and install a theorem prover and run it locally, and very few people would be interested in or able to do that. My thought at the time was, okay, we want these security properties to be shipped on chain, so that they're actually there in the same medium the user is interacting with, and then their MetaMask or their local browser or the block explorer can verify them. Unfortunately, blockchains are really constrained in the amount of computation they can do, and this is what got me into zero knowledge proofs. In 2021, I met my now co-founder, Chhi'mèd Künzang, who was the lead on the Filecoin proofs team.
Foundational Work in Zero Knowledge Proofs
Filecoin is one of the largest zk-SNARK deployments in the world. And we started talking, and he told me about this project that he was doing within Protocol Labs called Lurk, which is a functional language based on Lisp that can produce succinct zero knowledge proofs of its own execution. That was mind-blowing to me, because I had previously been trying to figure out how to cram this really expensive computation of type checking onto a blockchain and was basically banging my head against the wall again and again. We tried so many different things; we tried working with three or four different kinds of blockchains and integrating our type checking system. But with proofs, that all goes away, because now, okay, you have to spend a little bit more compute to create the proof off chain.
The Shift to Off-Chain Computation
But the on-chain verification cost is fixed, just constant for whatever thing you compute. And it turns out that this property is really useful not only for type checking, which is maybe an idiosyncratic interest of mine, though I think it'll be important in the long term, but also for blockchain scaling, for privacy, for blockchain interoperability, and all these other things. So Chhi'mèd and I worked together while he was still at Protocol Labs, and I was doing a startup called Yatima, named after the Yatima compiler. And then a couple of months later, we decided to join forces, and that's how Argument was born. So that was back in 2021. And take us forward now.
Progress from 2021 to Present
From 2021 to now, I guess.
Connection Between Matthew and Linera
When and how did you and Matthew connect, and how did this partnership with Linera come about? Yeah, so, as I was saying, this idea of resource constraints on chain has been a big thing that we've been thinking about and that people on our team have been dealing with.
Introduction of the Partnership
And our CTO, François Garillot, sent me this paper a couple of months ago and was like, hey, these guys at Linera, whom he knew from his time at Meta working on Libra with Matthew, seem to have solved some really important problems in this space. And so one of the things that's actually really challenging is that even though zero knowledge proofs are constant cost relative to the computation they represent, they're still, for kind of bad reasons I think, pretty expensive to verify on chain.
Challenges with Zero Knowledge Proofs
The gas to verify a proof on Ethereum is something like half a million gas, around $20. And so it really restricts the actual use cases. People have all sorts of ideas on how to solve this: you can do aggregation, you can amortize the cost over time, but then that really increases the latency. And so the thing that we have really been trying to figure out is, okay, we can do the zero knowledge proof side; we have a really good idea of how to make that work.
Proof Deployment Challenges
But even if we solve that perfectly, how do we actually get these proofs deployed? And so François introduced me to Matthew. We had several meetings and discussions, and basically we felt that the Linera protocol really addresses this deployment problem, because if we can get the cryptographic primitives that we need integrated into Linera, then the sky's the limit in terms of performance, in terms of the speed at which we can verify proofs.
Utilization of Micro Chain Architecture
And the microchain architecture is a really great fit for some things that we'd actually been modeling internally in terms of zero knowledge applications, something called chained functional commitments. So that's kind of how this came about. I think we instantly recognized the opportunity here, and we're really excited to be working together.
Matthew's Perspective on Partnership
Amazing. We'll come back to chained functional commitments, but Matthew, why don't you jump in and explain, from your perspective, why you felt the microchain architecture, the Linera architecture, was a solid fit for the zero knowledge proofs that Argument has been building.
Scaling Blockchain Infrastructure
Yeah, yeah. It's a really interesting partnership, because as you know, at Linera we like to see ourselves as a very scalable, very low latency blockchain infrastructure. And so the work on zero knowledge proofs was not at the center for me, even just a few months ago, but it was at the origin of Linera.
Past Experience with Payment Systems
Going back to some of the things that I did before this company: what I'm referring to is a payment system that was initially called FastPay. That was an infinitely scalable payment system that I developed with some colleagues at Libra, at Facebook, I mean, Meta. And that payment system was later extended with privacy coins.
Integration of User Accounts
And so we were using, it was not called microchains back then, but we were using essentially a user account to manage a spend list, to manage a private state. And the private state would be the coins. So there were definitely zero knowledge proofs involved. When I started the project, I decided, you have to choose.
Closing the Loop on Zero Knowledge
We were already innovating so much on the programming model in Linera and the microchain infrastructure. So basically, in the white paper there are a few references to how cool it would be to use zero knowledge technology in Linera, but we didn't have the manpower to do it. And so it's awesome to be able to close the loop here and work with experts in the field who are going to build amazing proof management systems on top of the Linera programming model, which was inspired by the research that I just mentioned.
Compatibility of Linera's Programming Model
So it's actually not entirely surprising, when you look at it, that Linera's programming model is good for zero knowledge. But yeah, that's really interesting, and it was still kind of surprising to me.
Technical Integration and Benefits
So I want to understand a little bit more about the technical integration and the benefits here. John, maybe you could start by explaining: how does Argument's Lurk ZKVM get integrated into the microchains of Linera's blockchain infrastructure? Yeah, absolutely.
Understanding Zero Knowledge Proofs
So I think it's worth backing up a little bit and explaining what zero knowledge proofs are and how they're constructed, because I think that's the right way to see why this integration with Linera will work particularly well versus what other people are doing. A zero knowledge proof, fundamentally, is a way of proving the result of some computation, of some relation, without revealing any information about that computation.
Practical Applications of Zero Knowledge Proofs
So you could think of it as: I want to prove that a billion rows in a spreadsheet all sum to a single number, that some accounting model is correct, without revealing the actual inputs, without revealing any of those rows. This is really useful, but it's very complicated, and it's a little bit counterintuitive that this should even be possible.
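To make the spreadsheet example a bit more concrete, here is a minimal Rust sketch of the split John is describing, with made-up names: a public statement, a private witness, and the relation a proof would attest to. It is only an illustration of the roles involved, not a proving system.

```rust
// Hedged sketch: the "relation" behind the spreadsheet example, with invented names.
// A real zk-SNARK or ZKVM proves this check without revealing `rows` to the verifier.

/// Public statement: what everyone gets to see.
struct Statement {
    claimed_total: u64,
}

/// Private witness: the billion rows stay with the prover.
struct Witness {
    rows: Vec<u64>,
}

/// The relation the proof attests to: "the hidden rows really sum to the claimed total."
fn relation_holds(stmt: &Statement, wit: &Witness) -> bool {
    wit.rows.iter().copied().sum::<u64>() == stmt.claimed_total
}

fn main() {
    let wit = Witness { rows: vec![10, 20, 30] };
    let stmt = Statement { claimed_total: 60 };
    // In a ZK system, the prover runs this check inside the proving system and
    // publishes only a small proof; the verifier never sees `wit.rows`.
    assert!(relation_holds(&stmt, &wit));
}
```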
Creating Zero Knowledge Proofs
When you explain this to people, sometimes they think, oh, well, if it takes hours to compute this thing, how do you verify the result of it in milliseconds and not know any of those inputs? And the answer is that what you do is you construct a very complicated mathematical object. You take this computation, and you convert it into a form where it can be expressed over a particular kind of polynomial, and then you construct a probabilistic argument where you say, okay, under these cryptographic assumptions, it's very unlikely that I would have been able to produce this easily checkable result unless I actually have the computation, the underlying computation that this represents.
Building Zero Knowledge Proofs
The point being that in order to create zero knowledge proofs of regular computations that are economically valuable, the kind people have been building in computing for decades, you have to do this very tricky and difficult-to-understand transformation into a new environment, something one of the people on our team calls polynomial computers, which I really like.
Development of Arithmetic Circuits
And traditionally, the way to do this is that you basically, by hand, create something called an arithmetic circuit or constraint system. And this is a little bit like programming directly in assembly, but it's not really like assembly at all, because assembly is deterministic. It's telling a machine what to do. This is really like writing rows in a matrix.
Challenges in Building Zero Knowledge Proofs
If anyone's taken linear algebra and done Gaussian elimination or by-hand matrix manipulation, it feels a little bit like that: you're encoding your computation as rows of a very big matrix. It's very counterintuitive, and the tooling has not really been there. And so people have been trying to figure out, okay, how do we make it easier to construct these zero knowledge proofs? Because there are some really important things that we can do with them.
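As a rough picture of what "rows of a very big matrix" means, here is a hedged sketch of a single R1CS-style constraint encoding x * y = z, checked over a toy prime field. Real constraint systems use large prime fields and enormous numbers of such rows; this only shows the shape of the encoding.

```rust
// Hedged sketch of one R1CS-style constraint, (A·s) * (B·s) = (C·s),
// over a toy prime field. Names and the tiny modulus are illustrative.

const P: u64 = 97; // toy prime modulus

/// Inner product of one constraint row with the assignment vector, mod P.
fn dot(row: &[u64], assignment: &[u64]) -> u64 {
    row.iter()
        .zip(assignment)
        .map(|(a, b)| (a * b) % P)
        .sum::<u64>()
        % P
}

fn main() {
    // Assignment vector s = (1, x, y, z) with x = 3, y = 5, z = 15.
    let s = [1, 3, 5, 15];

    // One row from each of the A, B, C matrices, encoding x * y = z.
    let a_row = [0, 1, 0, 0]; // selects x
    let b_row = [0, 0, 1, 0]; // selects y
    let c_row = [0, 0, 0, 1]; // selects z

    let lhs = (dot(&a_row, &s) * dot(&b_row, &s)) % P;
    let rhs = dot(&c_row, &s);
    assert_eq!(lhs, rhs); // the constraint holds: 3 * 5 = 15
}
```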
Importance of Zero Knowledge Proofs
I mean, you look at projects like Filecoin or Zcash, where you can use zero knowledge proofs to prove storage or keep transactions private. The value is really apparent to people. But the way to build it has been very difficult and very expensive.
Evolution of Zero Knowledge Tools
So the first wave of attempts to solve this was tools like Circom, basically languages that print out arithmetic circuits. They abstract away some of the details of the low-level manipulation of polynomials. They're a very interesting kind of language, but people haven't really found them intuitive.
Challenges of Learning New Languages
And learning new languages in order to do this very particular thing has been kind of challenging. So a more recent development has been something called a ZKVM, which is what we're doing. A ZKVM is the observation that, instead of encoding each specific computation in this polynomial computing environment, we can encode an interpreter, a universal interpreter for all computations, and then we don't have to program the polynomials.
Innovation with ZKVMs
We can just evaluate our computations like normal. This is a really cool insight. This is something that should be very familiar to anyone that's studied compilers or computability, how you can use this universal property to abstract away one layer of your stack.
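A minimal sketch of that universal-interpreter idea, in plain Rust with an invented toy instruction set (this is not Lurk's design nor any RISC-V ZKVM's): in a ZKVM, the constraints would describe the run function once, and any program then becomes data fed to it.

```rust
// Hedged sketch of the "universal interpreter" behind a ZKVM: encode one
// interpreter loop, then every program is just input data. Toy stack machine
// with invented opcodes, for illustration only.

enum Op {
    Push(i64),
    Add,
    Mul,
}

fn run(program: &[Op]) -> Option<i64> {
    let mut stack = Vec::new();
    for op in program {
        match op {
            Op::Push(v) => stack.push(*v),
            Op::Add => {
                let (b, a) = (stack.pop()?, stack.pop()?);
                stack.push(a + b);
            }
            Op::Mul => {
                let (b, a) = (stack.pop()?, stack.pop()?);
                stack.push(a * b);
            }
        }
    }
    stack.pop()
}

fn main() {
    // (2 + 3) * 4 — in a ZKVM, the constraints describe `run` itself, so the
    // same circuit can prove the execution of any program you hand it.
    let program = [Op::Push(2), Op::Push(3), Op::Add, Op::Push(4), Op::Mul];
    assert_eq!(run(&program), Some(20));
}
```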
Challenges in Writing ZKVMs
The problem is that it's also really hard to write these ZKVMs. One of the decisions that a lot of projects in this space have taken is to use ZKVM architectures that resemble traditional hardware instruction sets, because that's what most modern programming language tooling knows how to target.
RISC-V Architecture
A really common architecture for this is RISC-V, which is a reduced instruction set architecture that was developed to make it possible to build chips in silicon really efficiently, really quickly, really performantly. It's a great instruction set, and there are a lot of different kinds of RISC-V.
Compiler Integration with ZKVMs
There are different options you can switch on, in terms of whether it's u32s or u64s, or what other kinds of CPU-level features you get. But the important thing is that LLVM and the C and Rust compilers know how to output RISC-V.
User Experience in Zero Knowledge Applications
So the theory is, okay, we're going to make a ZKVM that looks like RISC-V, and then people will write their programs in Rust and use the Rust compiler to target that same output. And so the user experience, for people writing zero knowledge applications, will be very similar to writing normal applications.
Existing User Experience with ZKVMs
That all makes sense at a high level, and the user experience has mostly been pretty good for these systems. But the problem is, one, the compilers that are doing this have not been designed to target a cryptographic environment.
Security Vulnerabilities in ZKVMs
So there are some security vulnerabilities, some really unintuitive, bizarre security vulnerabilities, that open up in this space. And the other problem is that, fundamentally, a zero knowledge proof is not a constructive environment.
Understanding Zero Knowledge Confirmation
It's a non-deterministic environment. You're not creating new computations, you're verifying that some computation has already been done, and therefore, in most cases, there will be algorithms or ways of verifying a computation that are much cheaper than just proving the interpretation of that computation, if that makes sense.
Efficient Verification Techniques
So instead of computing the preimage of a hash constructively, you can just provide the preimage nondeterministically, and that'll shortcut and save a lot of work. There are a lot of interesting solutions to this.
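Here is a small Rust sketch of that preimage point, using the standard library's DefaultHasher as a stand-in for a real circuit-friendly hash: the relation only has to check the hint the prover supplies, never invert the hash. The names and the toy hash are illustrative, not anything from an actual proving system.

```rust
// Hedged sketch of "non-deterministic hints": the prover supplies the preimage
// as advice, and the relation only has to check it, which is far cheaper than
// searching for it. std's DefaultHasher stands in for a circuit-friendly hash.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn toy_hash(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

/// What the constraint system would encode: one hash evaluation plus an
/// equality check, with `claimed_preimage` filled in by the prover as a hint.
fn check_preimage(digest: u64, claimed_preimage: &[u8]) -> bool {
    toy_hash(claimed_preimage) == digest
}

fn main() {
    let secret = b"open sesame";
    let digest = toy_hash(secret);
    // The verifier-side relation never inverts the hash; it just checks the
    // hint that the prover supplied non-deterministically.
    assert!(check_preimage(digest, secret));
}
```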
Precompile Centric Architecture
One of the ZKVM architectures that we're working with heavily, Succinct's SP1, does something called a precompile-centric architecture. So you can smoothly transition from programs in the RISC-V ZKVM to programs written against an accelerator, kind of like an FFI, directly into those arithmetic constraints.
Importance of Direct Transition
So you can go from the abstracted CPU directly into, you can punch through to, the lower layer. That's really important, and we think that works really well, but RISC-V is still much slower than we think computations can be in this space.
Core Thesis of Functional Programming
This is where the functional programming comes in. The core thesis of our approach, I think, is that the situation is the reverse of the past several decades in computing, where functions are abstractions that you pay for, because when you implement computing systems in hardware, you really want to use mutability and state, and you actually have electrical values in physical registers that are changing over time.
Comparison of Programming Models
And so the imperative model has been much faster and has been what hardware manufacturers design for. And so when you want to implement a functional language on top of this, there's always this mismatch between what your CPU is doing and what the functional language wants to do. Anyone who's ever written in Haskell or OCaml will have an intuitive understanding of that property.
Implications for Zero Knowledge
People do pay this cost in many cases. JavaScript is essentially a functional language, and the complexity of, let's say, V8, the JavaScript engine, is largely there to navigate that mismatch in the abstractions. But in zero knowledge, the situation is reversed.
Efficiency of Functional Models in Zero Knowledge
We believe that in zero knowledge, the imperative model is an abstraction that you must pay for, and the functional model is more native, because in the zero knowledge environment memory is really cheap, and you care a lot about the pure semantics of the function that you're trying to compute versus the actual mechanical details of how you construct those results.
Memoization and Zero Knowledge Proofs
What we found with our systems is that this functional model allows for some very powerful optimizations, particularly memoization: the ability to reuse work that you've already done, in a way that the RISC-V model really struggles with. So it turns out that this is a performance story as much as a safety story.
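A minimal Rust illustration of the memoization point, nothing to do with Lurk's actual caching machinery: because a pure function of the same arguments always gives the same result, a previously computed (or previously proved) subresult can simply be looked up instead of being redone.

```rust
// Hedged sketch of memoization: reuse work already done instead of repeating it.
// Plain Rust illustration only.

use std::collections::HashMap;

fn fib_memo(n: u64, cache: &mut HashMap<u64, u64>) -> u64 {
    if n < 2 {
        return n;
    }
    if let Some(&v) = cache.get(&n) {
        return v; // previously computed result reused for free
    }
    let v = fib_memo(n - 1, cache) + fib_memo(n - 2, cache);
    cache.insert(n, v);
    v
}

fn main() {
    let mut cache = HashMap::new();
    // Without the cache this recursion does exponential work; with it, each
    // subproblem is evaluated exactly once, the analogue of reusing sub-proofs.
    assert_eq!(fib_memo(50, &mut cache), 12_586_269_025);
}
```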
The Return of Functional Programming
I find it super interesting, by the way. Everything you described is kind of the revenge of functional programming, because functional programming has been out there for a very long time now. Already in the eighties and nineties you had these functional programming languages coming out, and people were saying, this is the way you should program: the code is so much safer, so much more elegant, and you can reason about it, you can reason about the correctness of the code so much more easily. And so there was a big push for functional programming. But then the problem is that every time you would program something in a functional programming language, your colleagues using C or C++ would destroy you in terms of performance. I've been in this position myself, where you write something and it works until you have performance issues, and then you have to rewrite it in an imperative language. And so basically what you're saying is that in the world of ZK proofs it could very well be the opposite: the imperative programming model is not the best one, and you actually need to use functional languages. So I find it super interesting.
Overview of Linera's Programming Model
Yeah, exactly. Exactly. Yeah, go ahead. Yeah, no, and so, what Danny wanted to tackle is, I don't think we've actually said why Linera's programming model helps. Yeah. So I think maybe it's worth explaining a phrase that I used before, this idea of a chained functional commitment. This is very similar to the definition of a state monad in Haskell or in OCaml, which sounds scary, but it's just a function from some state type to the pair of a new state plus a return value; in terms of its actual implementation, it's very much not scary. The idea is basically that this is one of the ways, and probably the simplest way, to embed an idea of mutation and changing state into a pure, immutable functional model. And this is something that we've done in Lurk with this idea of a chained functional commitment: you have a cryptographic commitment to a function, and you can prove that this function, with particular inputs, will produce a new state and some kind of return value, and then you can chain that together and do it again and again and again.
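For the "state type to (new state, return value)" shape, here is a small Rust sketch with invented names (a toy balance transfer); it shows only the pure chaining pattern John is describing, with the cryptographic commitments and proofs elided.

```rust
// Hedged sketch of the state-monad shape: a step is a pure function from a
// state to (new state, return value), and the chain is repeated application.
// Commitments and proofs are elided; all names are illustrative.

/// One step, shaped like State -> (State, Output).
type Step<S, I, O> = fn(S, I) -> (S, O);

/// A toy token-balance state evolved purely, one transfer at a time.
#[derive(Debug, Clone, PartialEq)]
struct Balances {
    alice: u64,
    bob: u64,
}

fn transfer_to_bob(state: Balances, amount: u64) -> (Balances, bool) {
    if state.alice < amount {
        return (state, false); // state unchanged, operation rejected
    }
    let new_state = Balances {
        alice: state.alice - amount,
        bob: state.bob + amount,
    };
    (new_state, true)
}

fn main() {
    let step: Step<Balances, u64, bool> = transfer_to_bob;
    let s0 = Balances { alice: 100, bob: 0 };
    // Chaining: each call consumes the previous state and yields the next one.
    // In the Lurk setting, each transition would also come with a proof that
    // the committed function really produced it.
    let (s1, ok1) = step(s0, 30);
    let (s2, ok2) = step(s1, 90);
    assert!(ok1 && !ok2);
    assert_eq!(s2, Balances { alice: 70, bob: 30 });
}
```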
Modeling Smart Contract Features
And so the idea is that you can model all of the same smart contract features in this way, with this pure function that has this incrementally updating state. And it turns out that this model fits perfectly into the Linera microchain architecture. With the addition of precompiles, Linera precompiles for verifying Lurk proofs, we believe that we should be able to slot our Lurk-ish smart contract model directly into the microchain architecture, with some additional details for the message passing. And there is more complexity that we can add. But there's this really happy congruence where, in Linera, you guys have been trying to solve this idea of how do we make a blockchain ecosystem that is really fast, really performant, and is able to have a clean division of state between multiple parties, so that Alice owns one microchain, Bob owns another microchain, and they can communicate their updates to their particular states without stepping on each other's toes. That fits exactly into the Lurk model.
Practical Applications of Lurk and Microchains
Really, what we want to build through this partnership is the answer to what that looks like in practice. If we take a microchain and we make that microchain a Lurk function with Lurk data as the state, we're able to have cryptographic commitments so we can prove all the state updates, and we can then aggregate those proofs, so we have a succinct proof of the entire transition of the microchain. I think this unlocks some interesting properties. One is greater scalability, because now the validators don't actually have to compute those state transitions. This is sort of the classical reason people care about ZK from a scalability perspective: you go from verification by replication of the state machine to verification by a succinct cryptographic argument, so the validators in a blockchain network have to do less work. But there are also privacy applications to this as well, because then those updates don't have to reveal their inputs to the whole network. The whole network can validate them without necessarily knowing.
Future of Private Payments and Other Applications
For example, that some private token was sent from one person to another. Right? Concretely, I can have my own microchain, and on this microchain there's a proof that talks about a state that I keep off chain. I can evolve my state and update the proofs along the way, and the system, the validators, verify the proofs at each step. So my microchain acts as a certificate that the state is valid and is proven through all the transitions that I'm making to it myself. Right? Yes, exactly. And so what would be the applications? Like, what examples? Obviously privacy, like payments, for instance. What applications do you think people would develop first? Yeah, I think private payments is a really great example application of this, and private applications in general, because a private token could have a lot of other logic attached to it. Maybe you also want to do a private AMM, which may have some advantages.
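Matthew's picture of a microchain whose validators check a proof at every step, without ever seeing the off-chain state, can be sketched roughly as follows. Everything here is hypothetical and invented for illustration: verify_transition_proof stands in for a real Lurk or SNARK verifier, and nothing below is Linera's actual certificate format.

```rust
// Hedged sketch of a provable microchain: validators see only commitments to
// the off-chain state plus a proof per transition, and check that the
// commitments chain up. All names and formats are hypothetical.

struct TransitionCertificate {
    prev_state_commitment: [u8; 32],
    next_state_commitment: [u8; 32],
    proof: Vec<u8>, // opaque succinct proof bytes
}

/// Stand-in for the cryptographic verifier a precompile might expose.
fn verify_transition_proof(cert: &TransitionCertificate) -> bool {
    // Real check: the proof shows the committed function maps the committed
    // previous state to the committed next state. Stubbed for the sketch.
    !cert.proof.is_empty()
}

/// What a validator does: never re-executes the state machine, only checks
/// that each proof verifies and that commitments form an unbroken chain.
fn validate_chain(genesis_commitment: [u8; 32], certs: &[TransitionCertificate]) -> bool {
    let mut current = genesis_commitment;
    for cert in certs {
        if cert.prev_state_commitment != current || !verify_transition_proof(cert) {
            return false;
        }
        current = cert.next_state_commitment;
    }
    true
}

fn main() {
    let genesis = [0u8; 32];
    let cert = TransitionCertificate {
        prev_state_commitment: genesis,
        next_state_commitment: [1u8; 32],
        proof: vec![0xAA],
    };
    assert!(validate_chain(genesis, &[cert]));
}
```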
Exploring Use Cases in the ZK Ecosystem
And I think the sky's the limit in some sense, because now you have this ability to use this really performant on-chain infrastructure to mediate your proofs. A lot of the different things that people have tried to do with ZK, I see us being able to replicate within Linera, whether it's something that looks like Zcash or Aztec, that private payment model, which I think would have much better performance in this case, or something that looks more like Penumbra, or something that looks more like Filecoin, which I think could be really interesting: this idea of using proofs of spacetime to prove that particular data has been persisted by a storage provider. And I think that could be really relevant to this topic of data availability that people are very interested in these days.
Building a Universal Platform for Zero Knowledge Applications
So, yeah, I see this as really a way to build a universal platform for zero knowledge applications, one that combines the ability to create performant zero knowledge proofs in high-level languages that are safe and easy to write with a really fast on-chain environment, so you have speed in both places. And I think the conclusion is that speed really is the determining factor in which systems people are going to use. As we were just saying with functional programming, C and C++ have had the upper hand not because they're better languages or safer languages, but because, when push comes to shove and you want to get every last little bit of performance, that's what you use.
Quick Certificate Verification in the Linera Ecosystem
And so one of the benefits, one of the properties of Linera that's relevant here, is that when you push your proof and it gets certified in the Linera ecosystem, you get that certificate very quickly, because there are only a few round trips to the validators and the validation is done immediately. We don't use a shared sequencer or anything like that which would add latency. So you get the certificate of verification very quickly, and then you can use it in other systems, for instance. Right, right, exactly. So there's also this interoperability benefit as well. And I think of this as a way to inject, into other blockchain ecosystems that are capable of doing proof verification, any semantics that exist on Linera.
Integrating Certificates Across Blockchain Networks
So if you have a certificate where the Linera validators have agreed on some proof, you can transport that certificate over to Ethereum or to Solana or to Aptos or Sui using ZK, and in a way that is much safer than systems that require trusted servers, or some subset of trusted servers, to do that reporting. I think there are a number of really great ways in which this can also improve the connectivity of Linera to these other systems. Actually, one thing that's important here is that these provable microchains would also be aggregatable. So we can not only create the classic light client proof, which is the proof of the consensus of the remote chain, but in this case we can also create a succinct proof that all these state transitions across all these different provable microchains are correct.
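At its core, "transporting a certificate" means a receiving chain can check that enough of a known validator committee signed the same value. The Rust sketch below is a hypothetical illustration of that check, with signature verification stubbed out; it is not Linera's actual certificate format or threshold logic.

```rust
// Hedged sketch of checking a validator certificate on a receiving chain.
// Signature verification is stubbed; all structures are invented for illustration.

use std::collections::HashSet;

struct Vote {
    validator_id: u32,
    signature: Vec<u8>,
}

struct Certificate {
    value_hash: [u8; 32],
    votes: Vec<Vote>,
}

/// Stand-in for real signature verification against the validator's public key.
fn signature_is_valid(_validator_id: u32, _value_hash: &[u8; 32], sig: &[u8]) -> bool {
    !sig.is_empty()
}

/// Accept the certificate if strictly more than two thirds of the committee
/// produced a valid, non-duplicated vote on the same value.
fn certificate_is_valid(cert: &Certificate, committee: &HashSet<u32>) -> bool {
    let mut seen = HashSet::new();
    let valid_votes = cert
        .votes
        .iter()
        .filter(|v| {
            committee.contains(&v.validator_id)
                && seen.insert(v.validator_id)
                && signature_is_valid(v.validator_id, &cert.value_hash, &v.signature)
        })
        .count();
    3 * valid_votes > 2 * committee.len()
}

fn main() {
    let committee: HashSet<u32> = [1, 2, 3, 4].into_iter().collect();
    let cert = Certificate {
        value_hash: [7u8; 32],
        votes: vec![
            Vote { validator_id: 1, signature: vec![1] },
            Vote { validator_id: 2, signature: vec![1] },
            Vote { validator_id: 3, signature: vec![1] },
        ],
    };
    // 3 of 4 committee members signed, which clears the two-thirds threshold.
    assert!(certificate_is_valid(&cert, &committee));
}
```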
Enhancing Security and Efficiency in Blockchain
And that then, in some sense, allows those microchains to also be secured by the Ethereum validators, and I think that's going to be very powerful. I mean, when ZK is performant enough and it's easy enough to create zero knowledge proofs, I think this will truly make web3 a web instead of a collection of silos, which I think it largely has been, and allow people who have assets on Ethereum to move those assets to other chains, or to interact with chains that have better performance properties or privacy properties. Right. There's a lot to dig into on this interoperability piece as well.
Conclusion and Community Benefits
So I feel like there are incredible benefits to the developer community from what is possible here, but also incredible benefits to the end users. Before we get into all of that, though, I do think it's appropriate at this time to drop the secret password so that anybody who's been here is able to claim their POAP. So if you're here and you're interested in claiming a POAP for attending, you need the POAP mobile app. Open the mobile POAP app, and it says Mint in the bottom right corner. You click Mint and then there's a secret word. And here's the secret word; I think John's going to appreciate this. It's zero knowledge proof, spelled out. Zero knowledge proof is the secret word, which allows you to mint the POAP for this Spaces.
Minting and Spelling the Secret Word
There are 300 POAPs available and it's first come, first served. How do you spell zero? "How do you spell zero" is a question? Is it with a zed or a z? I'm not sure, it depends on what country you're from. No, you actually spell out the word: z-e-r-o. You spell out the word zero, then spell out the word knowledge, then spell out the word proof. That is the secret password, and that's how you mint a POAP for being on this call. There are only 300 available, so it's only open... sorry, everybody is checking the spelling of knowledge right now, thanks to you. Knowledge. Knowledge is spelled with an n. I don't know, I feel after listening to this call that I'm a little stupider.
Expanding Understanding of Blockchain
So if I can't spell, it's because I have been doing some linear algebra by hand over here in the corner. But no, you guys, this is amazing. I really do feel like you're expanding my understanding of what's possible with blockchain and of what these new mathematical developments are allowing us to do, on the Linera protocol obviously, but also in general. And so as everyone is minting their POAPs, I wanted to drop a few questions that have come in from the audience so far. One question came from, I'm sorry if I mispronounce your name, Chhi'mèd Künzang, and I think, Matthew, this is for you. You were talking a little bit about functional programming languages versus performant programming languages, and this question was about the Rust language.
Strengths of Rust in Platform Development
What strengths would you say it has for the development of the platform, compared to the many other languages that projects are implementing?
Discussion on Rust Language
Yeah, I mean, I'm pretty sure John also has things to say about what we think about Rust. So I think Rust is this amazing language that is at the same time type-safe, with a very strong type system, so the compiler is going to catch a lot of bugs for you and kind of force you to really think things through when you're writing the code, which is exactly what we want when writing secure systems such as blockchain infrastructure. And at the same time, the language is basically as performant as low-level languages like C and C++ for practical purposes. So this is an amazing language, and I think people who have been working in computer science for long enough know that this is just the language everybody was waiting for. People have tried, there have been research projects trying to create a safe version of C, I was even briefly an intern in one of them, and it was just so hard. And then suddenly people came up with this amazing design, Rust. And they also put some hard work into it.
The Importance of Rust
Rust itself took a while to emerge, and so it's really nice; it's a really good time to be a software engineer, I think. Yeah, absolutely, I echo all of that. I think Rust is one of the most important advancements in the field of computing within the past ten years, for sure. It's the solution to an incredibly hard problem, and in fact, in many ways, it's the problem we were describing, the mismatch between the functional world that's easy to reason about and the imperative world that hardware is targeting. Rust, through the use of the borrow checker, which uses a really advanced type theory called substructural types, is able to implement something called zero-cost abstraction, where you can use these high-level abstractions without paying a performance penalty for them when you actually compile down to assembly. It's in many ways almost a functional language. It doesn't really have first-class functions, but it does have a lot of the features that you might expect from, and that are coming from, these higher-level functional languages. The first version of Rust was implemented in OCaml.
Functional Programming and Rust
Many of the things in Rust, like traits, come directly from Haskell type classes. It's really a remarkable language, and I think the more people write in Rust instead of C or C++, the better the world will be. Now, that said, I do think there are some places where people are over-optimizing, or are using Rust, especially in ZK, to create ZK proofs in a way that is not, I think, a perfect fit for the problem that Rust is designed to solve. Rust is designed to solve that abstraction mismatch and get down to the hardware abstraction safely from this higher-level expression universe. And this is actually why people are building RISC-V and that low-level model in ZK, because they really want people to write in Rust. But the thing is that the borrow checker is actually not that helpful, fundamentally, for ZK, because the problem with ZK is not about making sure that you're using memory efficiently or that you're updating values in place versus creating copies.
Challenges with Zero-Knowledge Proofs
A zero knowledge interpreter is a tracing interpreter. If you have a value anywhere in your program, that value persists for the entire life of the proof. If you have a variable x and you change x from 1 to 2 to 3 to 4, then 1, 2, 3, and 4 are all living in your system immutably, and you're paying that memory cost. So while Rust is absolutely an incredible language, it's the language that I probably write the most code in these days and have for the past few years, I think that going to the language that Rust is emulating, which is essentially OCaml, has some even better properties when it comes to ZK, because now, in ZK, you're not paying the OCaml runtime cost. You can just use your functions without this sort of borrow-checking abstraction. The main criticism against functional programming languages like Haskell and OCaml is that they are garbage collected, and they also tend to allocate a lot of memory.
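John's "1, 2, 3, 4 all stay live" point can be pictured with a small Rust sketch contrasting a mutable cell with the append-only trace a prover commits to; the trace format here is invented for illustration and is not an actual ZK execution trace.

```rust
// Hedged sketch of the "tracing interpreter" point: in a proof trace nothing is
// overwritten, so every intermediate value of `x` stays live and is paid for.

fn main() {
    let updates = [1u64, 2, 3, 4];

    // Imperative view: one mutable cell, old values are gone.
    let mut x = 0u64;
    for v in updates {
        x = v;
    }
    assert_eq!(x, 4);

    // Trace view (what the prover actually commits to): every assignment is a
    // new row; 1, 2, 3, and 4 all persist for the life of the proof.
    let mut trace: Vec<u64> = vec![0];
    for v in updates {
        trace.push(v);
    }
    assert_eq!(trace, vec![0, 1, 2, 3, 4]);
}
```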
Garbage Collection in Zero-Knowledge
So for writing services that are going to run in the cloud, that tends to be expensive, and then you get a bit of garbage collection. But yeah, if you're running everything in zero knowledge, all of that basically disappears and gets abstracted away in the proof. Right, garbage collection really isn't a thing that you need to optimize for in ZK. Yeah, that's super interesting. Danny, any more questions? What did you have in reserve? Well, I think one of the things about this conversation that stood out to me most was when you were outlining some of the applications, the ideal applications, that are possible here. You talked about private tokens, private AMMs, private payments, and then you opened up the possibility of interoperability across chains.
Exploring Partnership Opportunities
And to me, that's the stuff that really gets me excited about this partnership and what's possible here. Matthew, I just want to know if there's anything you wanted to add on that front. Well, yeah, all of that. And then also, of course, from the point of view of Linera, the Linera infrastructure itself, we are super excited to fast-track our roadmap to potentially anchor the security of Linera into Ethereum. That's something very interesting coming out of this partnership: some of the building blocks that may come out of it are going to be immensely helpful simply for the security and decentralization of the Linera infrastructure itself. So, super interesting. That's huge. That's absolutely huge and a really cool outgrowth of this partnership. So, for either of you guys.
Next Steps for Developers
What are the next steps? When can developers begin taking advantage of this integration, and how will that work? For any developers here on this call who've learned a lot about what's possible, when can they start taking advantage of this integration? Yeah, so we are currently hard at work getting a demo up to show what the integration of our cryptographic primitives for verifying zero knowledge proofs into Linera will look like. We expect to have that live and open source within the next few months, certainly by the end of the year. That will really allow people to play with this concept of, okay, what can you do now that zero knowledge proofs are fast, cheap, and easy in a blockchain context? And then I think we will have a lot of interesting discussions and collaboration on getting that integrated into Linera itself.
Looking Forward to Innovation
Over the first part of next year. And my expectation is that if we all do our parts really well on this, those primitives for verifying proofs can be part of Linera when it goes live, and we will be building some really interesting applications that showcase what that integration can do. We're very excited. Everything that we're doing is done open source, so as soon as we have something that can be tinkered with, we will put it up and announce it, and I'm sure it'll be really exciting to the community. Right. And on our end, the ability to verify proofs in Linera will be deployed as soon as it's available and stabilized. It will be available in the devnet and the SDK and the whole software development environment.
Partnership Outcomes
So, yeah, get excited. Incredible. Incredible. All right, I feel like we've covered a lot of ground here. My final question is more of a fun one. We love memes over here at Linera; every week we do a kind of meme challenge where people create memes on a whole bunch of topics. And John, I don't know, have you ever seen any fun memes around zero knowledge proofs or anything in the world of ZK programming? Anything you've seen that's funny, or that we could bring back into Linera to highlight and celebrate this partnership? Yeah, absolutely. Actually, when you were talking about the POAP password earlier, the zero knowledge proof, I remembered that the word zero is actually really related to the word cypher.
Cultural Connections with Technology
In Arabic, the word for zero is sifr. And so if you think about cypherpunk, and you think about that whole aesthetic and the ideas that have been motivating blockchain from the beginning, we kind of have zero in there from the start. So "cypherpunk is zero punk" would be my main meme. I love that. I love that. Cypherpunk, zero punk. We're definitely going to play with that and see how we can bring it in. John, this has been hugely educational, thank you so much for explaining so much of what's happening here. And Matthew, I think this partnership just begins to showcase the power of Linera and what's possible with this type of microchain architecture, which will benefit all of blockchain and decentralization.
Conclusion and Gratitude
So thank you guys very much. I want you all to know we already have 169, I didn't make that up, that's actually the number of POAPs that have been minted from this call. Again, the password, the secret word, is zero hyphen knowledge proof. And there are still some POAPs available: if you are here on the call, make sure you go to your POAP app, click the mint button, and type that in as the secret word. Thank you guys very much for being on the call. This has been a lot of fun. I hope everybody's learned a lot, and we'll talk to you soon. Thank you, guys.