On Debt Cycles

Some notes from Ray Dalio's Principles for Navigating Big Debt Crises, discussing his framework, historical cases, and currency and capital markets in general.

Dalio's principles – 2 types of economic debt cycles: deflationary depressions and inflationary depressions
credit and debt become assets and liabilities
deeply inter-connected
stacked and denominated within each country's currency
currency devaluations, as in china's case, lead to an increase in inflation and exports (it becomes cheaper to purchase from the devaluing country as opposed to others), which feeds inflation as the currency is devalued

credit markets tighten when interest rates have already been lowered and money continues to be printed
flight assets and hard assets like gold and bitcoin become more attractive as capital flight takes place
china has responded to the tariffs imposed on its exports by devaluing its currency
this also increases the burden of debt denominated in foreign currencies relative to debt denominated in that country's own currency

when there is debt denominated in another country's currency it is much more difficult to navigate, because the debtor country cannot print the currency its debt is denominated in.
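A quick worked example with illustrative numbers (my own, not Dalio's): suppose a firm owes $100 of dollar-denominated debt and its home currency devalues from 6.5 to 7.0 per dollar. The local-currency cost of that debt rises

$$100 \times 6.5 = 650 \;\longrightarrow\; 100 \times 7.0 = 700,$$

roughly an 8% heavier debt burden without any new borrowing.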

october 2019: mnuchin says the government will have to print more
may 2020: the bitcoin halving is happening
other assets in the crypto space could weaken compared to bitcoin if it runs
devalued chinese currency is being sold for gold and bitcoin.

lending decreases and lenders become more risk-averse, which makes debt-service repayments more difficult.
as debt-service repayments decrease, credit-worthiness becomes more of a factor for net new loans.
the credit available in the currency decreases, and therefore there is a lack of income to service existing debt payments.

the weimar debt crisis was one of hyper-inflation, capital flight and the inability to make reparation payments related to WW1. realignment came through the gold-backed rentenmark.
in the late 1920's, margin loans for stock purchasing let lenders and borrowers profit on stocks.
in 2003-2009, mortgage loans for home purchasing let lenders and borrowers profit on housing due to low-credit or subprime loans + derivatives on the subprime loans.
in 2010-2020, startup loans (special purpose vehicles) for equity purchasing let lenders and borrowers profit on private valuations, m&a and IPOs.

tech startups subsidized by multiple rounds of venture capital investment have fueled the latest growth.
lenders were VCs / borrowers were startups.
higher up the chain it could actually be more systemic:
lenders were LPs (university endowments, funds of funds) / borrowers were VCs.

10 yr and 30 yr yields go negative (German bunds; US yields falling toward zero).
outstanding debt payments in the market: ~16 trillion in negative-yielding debt.

Returns from the investment in the token are then returned to LPs.

the devaluation of the chinese yuan, and the dollar becoming more of an inflationary asset.
China has increased its middle class, bringing close to 500 million people out of poverty over the last couple of decades.
More visibility into credit. Publicized credit ratings. Reputation economy.
Economic production in Shenzhen and Shanghai.

Flight asset via Real-Estate, Gold, Crypto-currencies
the student loans for degree/income purchasing led to lenders and borrowers profiting on college tuition.
the colleges leveraged the money, pouring endowment funds as LPs into venture capital funds, which ultimately provided capital to startups, which hired employees with college degrees.
however, while the cost of college tuition rose astronomically, wages stayed relatively flat over the same 20-year period.
college campuses have become major centers for bitcoin mining due to the subsidized energy costs associated with room and board.

Effectively three different debt cycles (currently)

LP -> VC debt cycle
VC -> Startup debt cycle
University -> Student debt cycle

other possible bubbles: crypto-currency LPs subsidizing the development of protocols that lack real usage.

3 Types of Currency
Fiat-based, state-governed currency – IMF, Bank of England, USD
Corporate basket currency – Libra
Crypto-currencies – Bitcoin, Ethereum

Currency scope: a cafe in SF doesn't accept euros; vice versa, a cafe in Barcelona doesn't accept US dollars.
digital realm / digital worlds / digital ecosystems can self organize around tokenized incentives.

Lenders and borrowers still remain.
Lenders can be protocol committees for funds that subsidize development by granting milestone-based token/fiat grants to teams for development of features on their protocol.
Borrowers can work in sprints, capitalize, allocate time and earn in return.
digital subsidies through crypto networks create a Schelling point and incentivize development of open source work.
The money still needs to come from somewhere.
early investors have a lower strike price on tokens while providing capital for development and funding for ecosystem grants.
Borrowers are owners of open-source tokenized protocols that offer tokens.
Lenders invest in the token.

The biggest factor again becomes whether debt is denominated in one's own currency; in this case the dollar, where navigating and creating policy to address an imbalance in credit/debt is more manageable.

Inter-firm Sales Automation

There is a repeatable insight that we have gathered over three years of working with Fortune 500s on Digital Transformation projects: the next wave of enterprise software is driven through inter-firm sales automation leveraging shared global state, governance mechanisms and organizational incentives.

In every value chain we have discovered that there is a need for standardization across companies related to revenue-driving applications.

What is the state of the art today?

1) cloud SORs, paper contracts, legal liability; ultimately these mitigate counterparty risk

2) internal process standardization, tooling, portals, systems, rpa

3) implemented workflows and automation to decrease cycle times

4) production / provisioning teams that reconcile changes (middle-back office)

5) Cost associated with unsolved discrepancies, resolution, time.

What does the future look like to us?

Blockchain-based inter-firm automation of the revenue cycle, including CRM + revenue-driving applications + AI/ML, in a GDPR-compliant manner whereby all participants leverage encrypted, automated, real-time data for improved workflow processes without losing control.

This is the next version of b2b enterprise software.

The cloud axiom is evaporating. The state of the art today is standardization within one company leveraging cloud-based SaaS applications. This leads to top-down visibility, metrics and growth. The processes that have been implemented within the customer's instance can be surfaced via portals and communities for partner re-selling and on-boarding, but there is still a disconnect in the value chain. The processes are available via the internet, in some cases via mobile device; however, across companies they are siloed, and the data is duplicated, not replicated, across the entire multi-tenant instance.

Across companies there is opacity, latency and reconciliation, because companies are on separate customized instances of the same exact software from the same vendors. This is in part because of a lack of trust, i.e. a company is not going to open up its middle/back office system to another company. The second component, especially for enterprise customers, is that there is a ton of custom logic and implementation at each company. How else could the customer actually have the product fit their requirements and solve a problem? But what if they could preserve the privacy of their most critical data while using the same exact processes distributed across their entire value chain with all their trade partners?

This type of technology platform could be applicable to a number of value chains that today are mostly dependent on other stakeholders to drive sales automation. Inter-firm sales automation can be applied to:

Brokers and Providers

Retailers and Vendors

Agents and Universities

Vendor and Sub-Vendor

Contractor and Sub-Contractor

Manufacturer and Distributor

So how do we achieve this? How can multiple organizations operate off of the same processes and data as opposed to reconciling the differences in their in-house cloud based systems?

Top Down Approach

Does consortia work? There are a number of consortiums leveraging the state of the art in the industry but does it solve the human coordination problem? Does the technology itself create the incentive for multiple companies to collaborate together? What really is the incentive for an organization to allocate resources to re-developing mission critical revenue processes with other organizations? Let alone, how do you get everyone at the table committed to leveraging global state and shared processes? Coordination problems can be solved by bringing multiple companies into the same platform but what is the second and third level impact? Is it ultimately going to be making the company money or saving the company money?

These are the types of coordination and incentive questions companies should be asking.

Bottom Up Approach

What about the bottom-up approach? Build a state-of-the-art network with a number of members who have a stake and incentives to govern and operate the network. Build the technology, create a standardized system, incentivize on-boarding, spread out the cost of operating the network and create distribution through the nodes. Start with a discrete use case. Is the bottom-up approach of developing the technology platform first and then finding validators or participants more effective?

What about for Enterprise B2B Sales and Service Automation?

Is there actually an issue with Sales and Service Automation across the value chain? If you were to call the COO or CRO of any Fortune 500 today and ask, "Do you need a solution for standardizing your procurement processes across your network of trade partners?"

Would they say it is a nice-to-have or a must-have solution for their business? Does it impact the customer experience? What are the cost and time savings versus customized implementations? How do we determine a best practice that is constantly evolving based upon the transactions coming into the network? Are we even able to evolve or upgrade the system over time?

What if all of the trade partners in a particular value chain leveraged the same application specific blockchain networks? Would this yield the same data and eliminate the reconciliation cost and latency related to siloed, customized, multi-tenant instances? Would there be benefits to multiple parties standardizing on a set of inter-firm business processes?

What are some of the challenges?

Standardization

Every company believes it has a unique best practice for serving its customers and driving top-line growth. Standardization internally is difficult at companies as well! There is always the sacred excel sheet or process that a rep cannot get rid of; yet top-down ops improvement and an ROI lead to adoption. Standardization across companies is even more difficult. Each company has its own set of processes and different levels of complexity. What are the incentives for an organization to re-architect its systems to a cross-tenancy, distributed standard? How do the organizations agree upon what is in fact the best practice, or the path to improving the network?

Privacy

How do we ensure that the data being created in the network is going to be compliant with privacy-preservation regulation such as GDPR and CCPA? How can a company ensure today that the data-mapping processes it has implemented for compliance will be uniform across all of the processors of data in its value chain? Two examples of this are in the AI/ML and AdTech industries. How can a company be compliant when one customer asks for their data to be removed from the controller of the data? That customer profile has already been fed into the ML model and is being used for analytics, not just within the company; it has also been sold downstream to other companies. Does the entire data set need to be deleted and the model retrained? In the AdTech scenario, does the aggregator of the data need to recall the batch customer data from the advertiser and re-sell them a new data set that is compliant based upon the one customer's request? These are real, very difficult problems that can be solved by moving towards systems that have the data-mapping controls to preserve privacy built in, and away from the cost of retrofitting existing cloud-based systems of record across companies.

Exclusion

How does the network continue to operate when a node wants to leave? Do they keep the processes and the data when they begin a net new network? In the Bitcoin and Ethereum blockchains, nodes can leave at will and rejoin, with the pre-requisite that they have to download the portion of the chain produced since they left the network. There, the challenge of exclusion largely dissolves; but for networks that do not require downloading the entire chain, exclusion becomes a very real and difficult problem to address.

What are some of the limitations?

The limitation for companies and users will be customized logic. This will need to be handled in a creative way for each company. While the main revenue-driving processes are orchestrated and yield the same data, there will need to be some hybrid solutions.

What does the future look like to us?

The processes will be standardized across the entire value chain leveraging application-specific, decentralized blockchain networks where companies have stakes in the networks they govern and drive growth from. The data becomes owned by the customer and is in fact first-party data, permissioned and replicated to specific members of the network. There are cloud/blockchain hybrids for data-mapping controllers and processors of client/customer data. The entire front-office, middle-office and back-office becomes an interoperable network of application-specific blockchains. The counterparty risks associated with non-performance and non-payment are mitigated by programming the incentives into the network. Inter-firm agreements, payments, files and messaging are all part of the interoperable network. The same way the enterprise moved from on-premise to cloud-based applications, it will move into application-specific cloud/blockchain hybrid networks that evolve over time. There is cross-pollination at global scale for blockchain runtime modules, services and privacy-preserving software.

The Rise of Rust and Blockchain

The Rust programming language is rapidly redefining the blockchain space. My goal is to learn the language through an evaluation of blockchain protocols and peer-to-peer networking layers, and to understand how the concepts of ownership and modules will drive the industry forward as a whole.

This is a very, very good programming language for safety and performance.

At a high level I understand there are crates and cargo, analogous to npm and node, for running packages of software written in Rust. The blockchain industry is increasingly demanding development in the Rust programming language. Here is my overview of Substrate and Libra and how the code is similar or dissimilar.

In learning more about Rust, WASM, WASI and blockchains, I have come to understand that systems-level programming and high-performance blockchains will be running in the browser. The Rust + WASM + blockchain combination enables powerful applications in the browser; applications that today one wouldn't want to have in the browser due to performance. The pattern will be a React and JavaScript interface paired with a WASM blob: a binary compiled from a program written in Rust.

In learning more about Rust, and specifically the cargo build system, it becomes clear that the development experience will be to leverage the set of libraries across these different projects to effectively create a new ultra-performant enterprise platform that is written in Rust and can run in the browser. A number of enterprise applications could use the browser to do high-performance computation, taking the most intensive components of the enterprise application and rewriting them as a WASM blob that runs in the browser on desktop and mobile.
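As a minimal sketch of that idea (assuming the wasm-bindgen crate; the function name and logic here are hypothetical), a Rust function can be compiled to a WASM blob and called from a React/JavaScript front end:

use wasm_bindgen::prelude::*;

// Hypothetical example: an expensive computation moved out of JavaScript
// into Rust-compiled WASM running in the browser.
#[wasm_bindgen]
pub fn fold_hashes(hashes: Vec<u32>) -> u32 {
    // XOR-fold the input as a stand-in for real heavy computation.
    hashes.iter().fold(0, |acc, h| acc ^ h)
}

Built with wasm-pack, the resulting package can be imported from JavaScript like any npm module.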

For an enterprise platform this becomes very important, in that it is not just for a system of record but for computational tasks, such as a blockchain network or ledger shared across multiple network participants in the browser, or for more computationally advanced processes that need extra guarantees around security and scalability.

Repos

I am looking into these repos that have been written in Rust and comparing what I see across them:

Starting with Substrate and looking into the transaction pool of the blockchain. The pool holds incoming extrinsics: transactions, and inherents such as timestamps, which are put into a mempool and propagated across the network.

The Rise of Substrate: A multi-blockchain universe

Substrate is the antithesis of the single world-computer, distributed-actor-framework smart contract platform. There will be thousands of application-specific blockchains built on this framework. In this framework the blockchain is more of a living organism in constant change, through an ongoing set of state transitions driven by extrinsics.

Substrate Specifications

Runtime architecture: WebAssembly
Implementation language: Rust

Component technologies provided with Substrate – here are some of the technologies bundled with Substrate.

You can swap out any of these components for your own alternative:

Networking: LibP2P
Consensus algorithm: Hybrid PBFT/Aurand
Randomness beacon: Collective coin flipping
Authentication algorithm: Edwards-Curve Ed25519
Hashing function: Blake2b
Address format: Versioned Base-58 + Blake2b checksum

Here is a link to the Substrate Developer Hub

A few commands to get going:

substrate-node-new <node-name> <author>

To create a runtime module:

substrate-module-new <module-name>

First the RPCs –

Author

Hash, BlockHash

Substrate authoring RPC API

submit_extrinsic,
pending_extrinsic,
watch_extrinsic,
unwatch_extrinsic

constructed via new(client, pool, subscriptions)

Chain

Substrate blockchain API

Hash,
Header,
Number,
SignedBlock

Relay Chain and the Canonical Chain

Subscribe and Unsubscribe -> Block heads

header, block, block_hash, finalised_head, subscribe_new_head, unsubscribe_new_head, subscribe_finalised_heads, unsubscribe_finalised_heads.

State

Substrate state API

Hash,

call

storage

storage_hash

storage_size

runtime_version

query_storage

State Subscribe and Unsubscribe -> Runtime Version

Subscribe and Unsubscribe -> Storage

call(method: String, data: Bytes, block:)
storage(key: StorageKey, block:)
storage_hash(key: StorageKey, block:)
storage_size(key: StorageKey, block:)
metadata(block:)
query_storage(key: StorageKey, from: Block::Hash, to:)
subscribe_storage, unsubscribe_storage, runtime_version, subscribe_runtime_version

System

Substrate system API

system_name,
system_version,
system_chain,
system_properties,
system_health


Substrate provides a keystore and the runtime modules.
Abstract block format
Crypto- and database-agnostic


For interacting with the chain using the RPCs go to https://polkadot.js.org/apps/#/toolbox


– base-16 modified Merkle Patricia trie (as in Ethereum)
– Sparse Merkle Trees
– Binary Merkle Trie

Wasm “execute_block” function
Extensible Networking, CLI, RPC
Roll your own Consensus
Blockchain PBFT
Probabilistic Finality
Consensus API

What do I get with Substrate?

  • Shared ancestry finality tool
    grandpa
  • Hot-swappable, pluggable consensus
  • Hot-upgradeable, pluggable STF
  • Light client
  • Chain synchronisation
  • Pub/Sub WebSocket JSON-RPC
  • Transaction queue
  • Pervasive, secure networking
  • JS implementation
  • Modular SRML if you want

Interchain connectivity via Polkadot
execute_block function

networking, block authoring and transaction queue
CORE SUBSTRATE

  • RPCs sync, databases, crypto, networking, storage
  • Telemetry
  • Light client
  • Change tracking
  • Pluggable consensus
  • Address formats
  • Low level JS utils

SRML SUBSTRATE

High level JS helpers
Front-end GUI infrastructure
Block authoring and transaction queue
JSON config
Chain explorer
event tracking

SOLO CHAIN | SOLO CHAIN + BRIDGE | PARACHAIN

Architected on industry-standard WebAssembly
Highly extensible Libp2p networking
Rust-based primary implementation for speed and reliability
Javascript secondary implementation for developability
Wasm WebAssembly interpreter, written in Rust

Substrate is a blockchain platform with a completely generic State Transition Function (STF) and modular components for consensus, networking and configuration.

Despite being "completely generic", it comes with both standards and conventions (particularly with the Substrate Runtime Module Library (SRML)) regarding the underlying data-structures which power the STF, thereby making rapid blockchain development a reality.

Each of these datatypes corresponds to a Rust trait. They are:

Hash, a type which encodes a cryptographic digest of some data. Typically just a 256-bit quantity.

BlockNumber, a type which encodes the total number of ancestors any valid block has. Typically a 32-bit quantity.

Digest, basically just a series of DigestItems, this encodes all information that is relevant for a light-client to have at hand within the block.

DigestItem, a type which must be able to encode one of a number of “hard-wired” alternatives relevant to consensus and change-tracking as well as any number of “soft-coded” variants, relevant to specific modules within the runtime.

Header, a type which is representative (cryptographically or otherwise) of all information relevant to a block. It includes the parent hash, the storage root and the extrinsics trie root, the digest and a block number.

Extrinsic, a type to represent a single piece of data external to the blockchain that is recognised by the blockchain. This typically involves one or more signatures, and some sort of encoded instruction (e.g. for transferring ownership of funds or calling into a smart contract).

Block, essentially just a combination of Header and a series of Extrinsics, together with a specification of the hashing algorithm to be used.

impl system::Trait for Runtime {
    /// The identifier used to distinguish between accounts.
    type AccountId = AccountId;
    /// The index type for storing how many extrinsics an account has signed.
    type Index = Nonce;
    /// The index type for blocks.
    type BlockNumber = BlockNumber;
    /// The type for hashing blocks and tries.
    type Hash = Hash;
    /// The hashing algorithm used.
    type Hashing = BlakeTwo256;
    /// The header digest type.
    type Digest = generic::Digest<Log>;
    /// The header type.
    type Header = generic::Header<BlockNumber, BlakeTwo256, Log>;
    /// The ubiquitous event type.
    type Event = Event;
    /// The ubiquitous log type.
    type Log = Log;
    /// The ubiquitous origin type.
    type Origin = Origin;
}


Substrate Runtime Architecture

[Image: Architecture of a Runtime – Substrate Developer Hub]


Runtime Proxy Functions

Have to implement these three:

  • what version
  • what client
  • how to execute a block
// Implement our runtime API endpoints. This is just a bunch of proxying.


impl_runtime_apis! {
    impl runtime_api::Core<Block> for Runtime {
        fn version() -> RuntimeVersion {
            VERSION
        }

        fn execute_block(block: Block) {
            Executive::execute_block(block)
        }

        fn initialize_block(header: &<Block as BlockT>::Header) {
            Executive::initialize_block(header)
        }

        fn authorities() -> Vec<AuthorityId> {
            panic!("Deprecated, please use `AuthoritiesApi`.")
        }
    }

    impl runtime_api::Metadata<Block> for Runtime {
        fn metadata() -> OpaqueMetadata {
            Runtime::metadata().into()
        }
    }

    impl block_builder_api::BlockBuilder<Block> for Runtime {
        fn apply_extrinsic(extrinsic: <Block as BlockT>::Extrinsic) -> ApplyResult {
            Executive::apply_extrinsic(extrinsic)
        }

        fn finalize_block() -> <Block as BlockT>::Header {
            Executive::finalize_block()
        }

        fn inherent_extrinsics(data: InherentData) -> Vec<<Block as BlockT>::Extrinsic> {
            data.create_extrinsics()
        }

        fn check_inherents(block: Block, data: InherentData) -> CheckInherentsResult {
            data.check_extrinsics(&block)
        }

        fn random_seed() -> <Block as BlockT>::Hash {
            System::random_seed()
        }
    }

    impl runtime_api::TaggedTransactionQueue<Block> for Runtime {
        fn validate_transaction(tx: <Block as BlockT>::Extrinsic) -> TransactionValidity {
            Executive::validate_transaction(tx)
        }
    }

    impl consensus_aura::AuraApi<Block> for Runtime {
        fn slot_duration() -> u64 {
            Aura::slot_duration()
        }
    }

    impl offchain_primitives::OffchainWorkerApi<Block> for Runtime {
        fn offchain_worker(n: NumberFor<Block>) {
            Executive::offchain_worker(n)
        }
    }

    impl consensus_authorities::AuthoritiesApi<Block> for Runtime {
        fn authorities() -> Vec<AuthorityId> {
            Consensus::authorities()
        }
    }
}

Runtime Module Template

The module's configuration trait, storage, declaration and event.

/// A runtime module template with necessary imports.
/// Feel free to remove or edit this file as needed.
/// If you change the name of this file, make sure to update its references in runtime/src/lib.rs.
/// If you remove this file, you can remove those references.
/// For more guidance on Substrate modules, see the example module:
/// https://github.com/paritytech/substrate/blob/master/srml/example/src/lib.rs

use support::{decl_module, decl_storage, decl_event, StorageValue, dispatch::Result};
use system::ensure_signed;

/// The module's configuration trait.
pub trait Trait: system::Trait {
    // TODO: Add other types and constants required to configure this module.

    /// The overarching event type.
    type Event: From<Event<Self>> + Into<<Self as system::Trait>::Event>;
}

/// This module's storage items.
decl_storage! {
    trait Store for Module<T: Trait> as TemplateModule {
        // Just a dummy storage item.
        // Here we are declaring a StorageValue, `Something`, as an Option<u32>.
        // `get(something)` is the default getter, returning the stored `u32` or `None` if nothing is stored.
        Something get(something): Option<u32>;
    }
}

decl_module! {
    /// The module declaration.
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {
        // Initializing events.
        // This is needed only if you are using events in your module.
        fn deposit_event<T>() = default;

        // Just a dummy entry point: a function that can be called by the
        // external world as an extrinsic. It takes a `u32` parameter,
        // stores it, and emits an event.
        pub fn do_something(origin, something: u32) -> Result {
            // Check that the call was signed, and get the signer.
            let who = ensure_signed(origin)?;

            // Store the passed-in u32 in storage.
            <Something<T>>::put(something);

            // Raise the Something event.
            Self::deposit_event(RawEvent::SomethingStored(something, who));
            Ok(())
        }
    }
}

decl_event!(
    pub enum Event<T> where AccountId = <T as system::Trait>::AccountId {
        // Just a dummy event.
        // Event `Something` is declared with parameters of type `u32` and `AccountId`.
        // To emit this event, we call the deposit function from our runtime functions.
        SomethingStored(u32, AccountId),
    }
);

Runtime Module Key Value Example

This is a simple runtime module that stores a key-value map.

use srml_support::{decl_module, decl_storage, StorageMap, dispatch::Result};

pub trait Trait: system::Trait {}

decl_module! {
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {
        fn set_mapping(_origin, key: u32, value: u32) -> Result {
            // Insert the key/value pair into the storage map.
            <Value<T>>::insert(key, value);
            Ok(())
        }
    }
}

decl_storage! {
    trait Store for Module<T: Trait> as RuntimeExampleStorage {
        Value: map u32 => u32;
    }
}

Libra

At the networking level the two are comparable: Substrate is built on libp2p core, while Libra uses Parity's multiaddr format.

Substrate is based on:

use libp2p::core::

Libra is based on:

parity_multiaddr::Multiaddr;
fn listen_on(&self, addr: Multiaddr) -> Result<(Self::Listener, Multiaddr)>

– chain level

I was able to get Libra working on my Linux Subsystem using the following commands:

git clone https://github.com/libra/libra

cd libra

./scripts/dev_setup.sh

source /home/<user>/.cargo/env

./scripts/cli/start_cli_testnet.sh

Libra Blockchain API

account_state_sets

fn put_account_state_set(
    store: &StateStore,
    account_state_set: Vec<(AccountAddress, AccountStateBlob)>,
    version: Version,
    root_hash: HashValue,
    expected_nodes_created: usize,
    expected_nodes_retired: usize,
    expected_blobs_retired: usize,
) -> HashValue

The storage layer uses RocksDB:

pub type ReadOptions = rocksdb::ReadOptions;
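As a rough sketch of what that dependency looks like (assuming the rocksdb crate; the key and value here are made up, and the real storage module wraps RocksDB in typed schema layers):

use rocksdb::DB;

fn main() {
    // Open (or create) a RocksDB database as the physical storage engine.
    let db = DB::open_default("/tmp/ledger-db").unwrap();

    // Persist an account-state blob under an illustrative key.
    db.put(b"account:alice", b"balance=100").unwrap();

    // Read it back.
    if let Ok(Some(value)) = db.get(b"account:alice") {
        println!("{}", String::from_utf8_lossy(&value));
    }
}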


A simple tx script in Move IR that transfers a coin from one address to another:


public main(payee: address, amount: u64) {
  let coin: 0x0.Currency.Coin = 0x0
    .Currency
    .withdraw_from_sender(copy(amount));

  0x0.Currency.deposit(copy(payee), move(coin));
}

Oasis: Blockchain and WASM

By leveraging compiler support and tools built for Wasm and WASI, the blockchain becomes a powerful tool for high-integrity — and even confidential — general-purpose “cloud” computation.

use oasis_std::Context;

#[derive(oasis_std::Service)]
struct Quickstart;

impl Quickstart {
    pub fn new(_ctx: &Context) -> Self {
        Self
    }

    pub fn say_hello(&mut self, ctx: &Context) -> String {
        format!("Hello, {}!", ctx.sender())
    }
}

fn main() {
    oasis_std::service!(Quickstart);
}

#[cfg(test)]
mod tests {
    extern crate oasis_test;

    use super::*;

    #[test]
    fn test() {
        let sender = oasis_test::create_account(1);
        let ctx = Context::default().with_sender(sender);
        let mut client = Quickstart::new(&ctx);
        println!("{}", client.say_hello(&ctx));
    }
}

From https://medium.com/oasislabs/blockchain-flavored-wasi-50e3612b8eba:

Notes on struct, impl, and pub fn.

Everything becomes one main.rs file as opposed to separate files. The compiler checks the code and produces a WASM blob, which is deployed on chain with RPC endpoints for getting, changing and setting state.

Define and implement Oasis service RPCs in Rust.

fn main() { oasis_std::service!(Ballot); }

This compiles to WASM. Deploy it to the platform and set up a client to call it.

use map_vec::Map;
use oasis_std::{Address, Context};

#[derive(Serialize, Deserialize, Service)]
pub struct Ballot {
    description: String,
    candidates: Vec<String>,
    tally: Vec<u32>,
    accepting_votes: bool,
    admin: Address,
    voters: Map<Address, u32>,
}

pub fn new(ctx: &Context, description: String, candidates: Vec<String>) -> Self {
    Self {
        description,
        tally: vec![0; candidates.len()],
        candidates,
        accepting_votes: true,
        admin: ctx.sender(),
        voters: Map::new(),
    }
}

/// Returns the candidates being voted upon.
pub fn candidates(&self, _ctx: &Context) -> Vec<&str> {
    self.candidates.iter().map(String::as_ref).collect()
}

/// Returns the description of this ballot.
pub fn description(&self, _ctx: &Context) -> &str {
    &self.description
}

/// Returns whether voting is still open.
pub fn voting_open(&self, _ctx: &Context) -> bool {
    self.accepting_votes
}

We have access to the state of the service, as provided by a reference to self.

For state changes:

You'll see that &self has changed to &mut self, but this is just Rust's way of knowing that you want a mutable reference.

Then define getter functions to get state from the stores.

This is similar to the Hyperledger Composer model, where you define a data model in a .cto file, declaring assets, transactions and events. Defining JavaScript functions for logic with reference to a namespace is the same as an impl on a struct (the service state object) with RPCs defined as pub fns, instead of a separate REST API to make calls on the model.

A service is created on chain with RPC endpoints.

Clients can call the service endpoints and add listeners for events from the service.

Blockchain WASI

While cloud computing has long brought cost savings and ease of use, switching from an on-prem solution to cloud has traditionally come with its own inherent risks, including a degradation in security and a lack of auditability. These are areas that have the potential to be solved with new emerging technologies, including WebAssembly, the WebAssembly System Interface, and blockchain.

We propose a mechanism for trustworthy, uncensorable, and autonomous cloud computation based on the combination of three emerging technologies: Web Assembly, the Web Assembly System Interface, and blockchain.

Oasis Network

The Oasis Network uses three main protocols for communication.

Confidentiality is achieved in the Oasis Network by relying on trusted execution environments (TEEs) to secure the execution of any given smart contract. Initially, the Oasis Network will utilize Intel SGX. As more TEE technologies mature, we expect support for more TEEs than Intel SGX.

from https://docs.oasis.dev/operators/architectural-overview.html#modular-architecture

libp2p

libp2p is a modular and extensible network stack for overcoming the networking challenges faced when building peer-to-peer applications. It is well defined, composable and swappable: a modular system of protocols, specifications and libraries that enable the development of peer-to-peer network applications.

It ultimately is a collection of peer-to-peer protocols for finding peers, connecting to them, finding content, and transferring it.

Here is a sample ping example from rust-libp2p:

use futures::{prelude::*, future};
use libp2p::{identity, PeerId, ping::{Ping, PingConfig}, Swarm};
use std::env;

fn main() {
    env_logger::init();

    // Create a random PeerId.
    let id_keys = identity::Keypair::generate_ed25519();
    let peer_id = PeerId::from(id_keys.public());
    println!("Local peer id: {:?}", peer_id);

    // Create a transport.
    let transport = libp2p::build_development_transport(id_keys);

    // Create a ping network behaviour.
    //
    // For illustrative purposes, the ping protocol is configured to
    // keep the connection alive, so a continuous sequence of pings
    // can be observed.
    let behaviour = Ping::new(PingConfig::new().with_keep_alive(true));

    // Create a Swarm that establishes connections through the given transport
    // and applies the ping behaviour on each connection.
    let mut swarm = Swarm::new(transport, behaviour, peer_id);

    // Dial the peer identified by the multi-address given as the second
    // command-line argument, if any.
    if let Some(addr) = env::args().nth(1) {
        let remote_addr = addr.clone();
        match addr.parse() {
            Ok(remote) => {
                match Swarm::dial_addr(&mut swarm, remote) {
                    Ok(()) => println!("Dialed {:?}", remote_addr),
                    Err(e) => println!("Dialing {:?} failed with: {:?}", remote_addr, e),
                }
            },
            Err(err) => println!("Failed to parse address to dial: {:?}", err),
        }
    }

    // Tell the swarm to listen on all interfaces and a random, OS-assigned port.
    Swarm::listen_on(&mut swarm, "/ip4/0.0.0.0/tcp/0".parse().unwrap()).unwrap();

    // Use tokio to drive the `Swarm`.
    let mut listening = false;
    tokio::run(future::poll_fn(move || -> Result<_, ()> {
        loop {
            match swarm.poll().expect("Error while polling swarm") {
                Async::Ready(Some(e)) => println!("{:?}", e),
                Async::Ready(None) | Async::NotReady => {
                    if !listening {
                        if let Some(a) = Swarm::listeners(&swarm).next() {
                            println!("Listening on {:?}", a);
                            listening = true;
                        }
                    }
                    return Ok(Async::NotReady);
                }
            }
        }
    }));
}

On Libra

Facebook announced its open-source Libra Blockchain and cryptocurrency today in its effort to create a global financial transaction system. This is a game changer. One of the largest companies in the world just launched its own cryptocurrency, and it will not be the last. Facebook is taking advantage of a decade of innovation, building on the learnings of other crypto protocols for state transitions, networking and consensus. It also comes without a lot of the technical debt that other projects have. It is a public yet centralized cryptocurrency, and from a regulatory standpoint it is yet to be determined how regulators and central banks will react to the new financial protocol.

Libra’s mission is to enable a simple global currency and financial infrastructure that empowers billions of people.

The Libra protocol is a deterministic state machine that stores data in a versioned database.

Libra Stablecoin

Libra (LBR) is a stablecoin and uses an account-based model similar to Ethereum for payments and gas fees across a number of validators maintaining the ledger. The ledger state, or global state of the Libra Blockchain, comprises the state of all accounts in the blockchain. Libra Core is a permissioned blockchain network with a set of validators hosting replicas of the network, consisting of financial institutions, software incumbents and others. It is backed by a reserve of assets and it is governed by the independent Libra Association. To be a validator node for the Libra Association, it is a requirement to invest in the reserve. They have raised $100 million USD.

LibraCore

The system consists of a blockchain VM called Libra Core, written in Rust. It has a very similar account model and transaction system to Ethereum, while maintaining a modular and type-safe smart contract language called Move. Lastly, it has its own consensus algorithm called LibraBFT.

[Image: Network · Libra]

It uses 32-byte account keys and SHA3-256 as the main hash function. For the networking component it uses its own combination of technologies, similar to libp2p:

  • Multiaddr scheme for peer addressing.
  • TCP for reliable transport.
  • Noise for authentication and full end-to-end encryption.
  • Yamux for multiplexing substreams over a single connection; and
  • Push-style gossip for peer discovery.

They use a sparse Merkle tree that represents ledger state and the storage module uses RocksDB as its physical storage engine.

LibraBFT Consensus

LibraBFT is a consensus algorithm that is based on HotStuff, of the same family as Tendermint and PBFT.

In LibraBFT, validators receive transactions from clients and share them with each other through a shared mempool protocol. The LibraBFT protocol then proceeds in a sequence of rounds. In each round, a validator takes the role of leader and proposes a block of transactions to extend a certified sequence of blocks (see quorum certificates below) that contain the full previous transaction history. A validator receives the proposed block and checks its voting rules to determine if it should vote for certifying this block. These simple rules ensure the safety of LibraBFT, and their implementation can be cleanly separated and audited. If the validator intends to vote for this block, it executes the block's transactions speculatively and without external effect. This results in the computation of an authenticator for the database that results from the execution of the block. The validator then sends a signed vote for the block and the database authenticator to the leader. The leader gathers these votes to form a quorum certificate that provides evidence of at least 2f + 1 votes for this block and broadcasts the quorum certificate to all validators.
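A quick sanity check on why 2f + 1 votes suffice (standard BFT quorum arithmetic, not notation from the Libra paper): with n = 3f + 1 validators, any two quorums of size 2f + 1 overlap in at least

$$(2f + 1) + (2f + 1) - (3f + 1) = f + 1$$

validators. With at most f Byzantine validators, at least one honest validator sits in both quorums, so two conflicting blocks can never both gather quorum certificates.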

Move

Move takes its name from move semantics in Rust and C++. It is a scripting language for transactions and modules that is used by clients to make updates to the key-value ledger state. It is also to be used for governance and validator membership.

The MoveVM is a stack machine with a static type system.

0x0.LibraAccount

0x0.LibraCoin

Are examples of Modules used for transactions in the example below:

// Multiple payee example. This is written in a slightly verbose way to
// emphasize the ability to split a `LibraCoin.T` resource. The more concise
// way would be to use multiple calls to `LibraAccount.withdraw_from_sender`.

import 0x0.LibraAccount;
import 0x0.LibraCoin;
main(payee1: address, amount1: u64, payee2: address, amount2: u64) {
  let coin1: R#LibraCoin.T;
  let coin2: R#LibraCoin.T;
  let total: u64;

  total = move(amount1) + copy(amount2);
  coin1 = LibraAccount.withdraw_from_sender(move(total));
  // This mutates `coin1`, which now has value `amount1`.
  // `coin2` has value `amount2`.
  coin2 = LibraCoin.withdraw(&mut coin1, move(amount2));

  // Perform the payments
  LibraAccount.deposit(move(payee1), move(coin1));
  LibraAccount.deposit(move(payee2), move(coin2));
  return;
}

Interestingly, they don't define the scripting around assets but rather around resources.

Move internalizes the idea of memory ownership and borrowing, very similar to how Rust operates. However, the novelty of the Move language is the way in which a ‘resource’ is defined.

A 'resource' is a structure datatype that utilizes the ownership model, but can never be copied, only moved and borrowed.
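The analogy to Rust ownership can be made concrete. A minimal Rust sketch (my own illustration, not Move code): a non-Copy struct behaves like a resource in that it can be moved exactly once, and the compiler rejects any further use.

struct Coin {
    value: u64,
}

// Takes ownership of the coin; the caller cannot use it afterwards.
fn deposit(coin: Coin) {
    println!("deposited {}", coin.value);
}

fn main() {
    let coin = Coin { value: 10 };
    deposit(coin); // ownership moves into `deposit`
    // deposit(coin); // compile error: use of moved value `coin`
}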

Distribution

Facebook has distribution. The best distribution in the world, due to its ownership of messaging applications, Oculus and social media platforms. Billions of users. The blockchain and cryptocurrency race has just started.

For more information:

https://developers.libra.org/docs/welcome-to-libra

Web3: Beyond Cloud Computing

Blockchain computing will make your business more competitive than Cloud computing.

Do you believe me? If so, let me take you on a journey to the future.

In web3, decentralized global organizations transact assets and agreements on universal settlement layers. Systems for automating and transacting assets through Front-Office, Middle-Office and Back-Office Applications are networked, orchestrated and verifiable across multiple distrusting organizations. Transactions are real-time and settle instantly across different organizations across the globe. Enterprise applications preserve the privacy of customers. Customer data is not used in machine learning models without consent. Agreements are auto-executing, competitive and codified with external stakeholders. Auto-executing decentralized public blockchains are deterministic and provide finality through state-of-the-art consensus algorithms. A network of blockchains will serve as the next-generation of the internet through digitally native enterprise agreements and tokenized assets. There will be thousands of interoperable blockchain networks, millions of tokenized assets and these networks have immutable liquidity through traversable enterprise merkle trees. This is the future of the enterprise software industry, a decentralized system of agreement between global stakeholders.

2019-04-29 19_20_17-Dapps Inc. DSOA Pitch - Google Slides

Today in web2, we can see the state of the objects and entities organizations use to manage sales, subscriptions, leads and accounts; without manual updates, these growth-critical assets are all stale. Information and communication via mobile and web-based applications drive growth. Tools are used to synthesize the data points. I spent the last 7 years working in these systems of record: building and configuring applications that implemented workflows and processes, custom user interfaces, even bots to abstract away the pain of updating SORs manually. Importing and spinning up instances, installing packages and modules, lots of missed checkboxes, customized demos in cloud silos. I have learned that most of these applications result in a paper deliverable. Any step to automate the rules of generation, dynamically merge fields or streamline signatures leads to a sales tool. There is a consolidation of cloud storage and of the many applications used to patch together these silos as they have emerged. The state of the art today between counterparties is applications within Slack, Gmail and Salesforce; and of course Word, Powerpoint and Excel, the former residing in data centers on Oracle clusters.

Web3 is bringing the Enterprise and Consumers into a world that is:

  • Open
  • Secure
  • Trusted
  • Privacy-Preserving
  • Direct
  • Verifiable
  • Honest
  • Immutable
  • Decentralized
  • Global
  • Real-Time
  • Auto-executing
  • Liquid
  • Consistent
  • Deterministic
  • Consensus Driven

To bring about:

  • Eliminating Systemic Mistrust across Counter-parties
  • Knowledge and Wealth Distribution
  • Verifiable Digital Agreements and Digital Assets
  • Data Ownership and Security
  • Open-source, blockchain based, tokenized systems
  • Self-organizing groups and incentive systems
  • Global enterprise business networks and asset ledgers
  • Tokenized Asset Platforms and Markets for Contracts and Assets
  • Unstoppable, Decentralized Global Settlement Layers
  • Data Commons, Open Verifiable Data Exchanges

In response to a web2.0 that is comprised of:

  • Bought and Sold Customer Data
  • No Trust
  • No Accountability
  • Not Verifiable
  • Evasive
  • Siloed
  • Manual
  • Copied
  • Risk
  • Customized
  • Data owned
  • Not persistent
  • Temporary
  • Outsourced
  • Controlled by a few
  • Multi-tenant
  • Rented
  • Expensive
  • Closed

The mediums we use to communicate, to transact, to agree; are controlled by intermediaries who charge us for our use and then sell our data. The enterprise applications are siloed, failing systems of record and unimpressive systems of intelligence. The processes we use in applications are engineered to make them more addictive. The shift is happening into web3 and there is a network of people working on recreating an open web, open financial system and decentralized system of agreement for the world.


Serverless Containerization

Kubernetes and Knative are driving sequential, scalable and containerized build systems scoped within pods for state sharing. Knative is a set of open-source components and platform-level custom APIs installed on Kubernetes.

In the next few minutes I will walk through how to containerize a node.js or Spring Boot application, deploy it to Google Kubernetes Engine and use the Knative APIs to manage the containers from zero to infinite.

  1. Create an account on Google Cloud Platform (GCP) and on the left-hand side go to Kubernetes Engine
  2. Create a new cluster with 3 nodes and connect to the cluster
  3. Create a Dockerfile for your application, either a node.js or a Spring Boot app
  4. docker build -t gcr.io/<PROJECTNAME>/<NAME>:<TAG> .
  5. docker push gcr.io/<PROJECTNAME>/<NAME>:<TAG>
  6. We'll assume that you built the image to gcr.io/${PROJECT_ID}/name:latest and you've created the Kubernetes cluster as described above.
  7. Once connected to the cluster you can run kubectl to interact with Kubernetes.

Intro Kubernetes Engine Commands & General K8 Deployment Steps

  1. Containerize Application (spring boot, express server for example) with Dockerfile
    1. Docker Tag
    2. Docker Push
  2. Push the tagged container to gcr.io on GCP
  3. kubectl run <name> --image=gcr.io/project/name:tag
  4. kubectl get pods
  5. kubectl expose deployment <name> --type=LoadBalancer --port 80 --target-port 8080
  6. Move the external IP to a domain in the DNS Settings
  7. kubectl scale deployment <name> --replicas=<n>
  8. kubectl get pods
  9. kubectl delete service name
  10. gcloud container clusters delete <name>-cluster

Knative adds an abstraction layer to the orchestration process using some key concepts around revisions, rollouts and templates.

Knative makes it possible to:

  1. Deploy and serve applications with a higher-level and easier to understand API. These applications automatically scale from zero-to-N, and back to zero, based on requests.
  2. Build and package your application code inside the cluster.
  3. Deliver events to your application. You can define custom event sources and declare subscriptions between event buses and your applications.

This is why Knative provides a developer experience similar to serverless platforms. Knative builds images inside Kubernetes pods; here is the Knative API:

Service: describes an application on Knative.

Configuration: creates a new revision when the revisionTemplate field changes.

Route: configures how traffic should be split between revisions.

Revision: a read-only snapshot of an application image and settings.

Rollout Percent: what percentage of the traffic the candidate revision gets.


Build Template:

Build: declares an ordered set of build steps

ClusterBuildTemplate:

istio-ingressgateway – the service mesh gateway

kubectl – the Kubernetes CLI

kubectl get ksvc (Knative Service)


Here are the steps to deploy a containerized application on GKE using Kubernetes and Knative:

A production deployment comprises two parts: your Docker container, and a front-end load balancer (which also provides a public IP address.)

THIS.

Once this clicked for me, Kubernetes made a lot more sense. It will continue to become clearer, but once I figured out that there is a public IP address linked to the set of containers, it became much easier to reason about (similar to the external IP on a VM, but for the containerized app).


Helm is a package manager for Kubernetes applications, similar to NPM for Node.js.

We can also use Helm to bring in the application to the Cluster.


Helm now has an installer script that will automatically grab the latest version of the Helm client and install it locally.

You can fetch that script, and then execute it locally. It’s well documented so that you can read through it and understand what it is doing before you run it.

$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh

helm init

kubectl create serviceaccount --namespace kube-system tiller

kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'


Create a Dockerfile for your application, either a node.js or a Spring Boot app. Then build and push the image:

docker build -t gcr.io/<PROJECTNAME>/<NAME>:<TAG> .

docker push gcr.io/<PROJECTNAME>/<NAME>:<TAG>

We'll assume that you built the image to gcr.io/${PROJECT_ID}/name:latest and you've created the Kubernetes cluster as described above.

Once connected to the cluster, you can run kubectl to interact with Kubernetes.

Create a deployment:

kubectl run name --image=gcr.io/${PROJECT_ID}/name:v1 --port 8080

This runs your image on a Kubernetes pod, which is the deployable unit in Kubernetes.

The pod opens port 8080, which is the port your Spring Boot application is listening on.

You can view the running pods using:

kubectl get pods

  2. Expose the application by creating a load balancer pointing at your pod:
kubectl expose deployment demo --type=LoadBalancer --port 80 --target-port 8080

This creates a service resource pointing at your running pod. It listens on the standard HTTP port 80, and proxies back to your pod on port 8080.

  3. Obtain the IP address of the service by running:
kubectl get service

Initially, the external IP field will be pending while Kubernetes Engine procures an IP address for you.

If you rerun the kubectl get service command repeatedly, eventually the IP address will appear.


You can then point your browser at that URL to view the running application.

Congratulations! Your application is now up and running!

Now we are going to add KNative to the cluster

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value core/account)


You can read the documentation at https://github.com/knative/docs.

Knative is still Kubernetes

If you deployed applications with Kubernetes before, Knative will feel familiar to you. You will still write YAML manifest files and deploy container images on a Kubernetes cluster.

Knative APIs

Kubernetes offers a feature called Custom Resource Definitions (CRDs). With CRDs, third party Kubernetes controllers like Istio or Knative can install more APIs into Kubernetes.

Knative installs three families of custom resource APIs:

  • Knative Serving: Set of APIs that help you host applications that serve traffic. Provides features like custom routing and autoscaling.
  • Knative Build: Set of APIs that allow you to execute builds (arbitrary transformations on source code) inside the cluster. For example, you can use Knative Build to compile an app into a container image, then push the image to a registry.
  • Knative Eventing: Set of APIs that let you declare event sources and event delivery to your applications. (Not covered in this codelab due to time constraints.)

Together, the Knative Serving, Build and Eventing APIs provide a common set of middleware for Kubernetes applications. We will use these APIs to build and run applications.

  1. Install Istio: Knative uses Istio for configuring networking and using request-based routing.
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.4.0/istio-crds.yaml && \
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.4.0/istio.yaml


  2. Label the default namespace with istio-injection=enabled (this automatically injects an Istio proxy sidecar container into all pods deployed to the "default" namespace):
kubectl label namespace default istio-injection=enabled
  3. Monitor the Istio components until all of the components show a STATUS of Running or Completed:
kubectl get pods --namespace istio-system


It will take a few minutes for all the components to be up and running; you can rerun the command to see the current state.

If you need to reclaim cluster resources, the Istio ingress gateway components can be removed:

kubectl delete svc istio-ingressgateway -n istio-system

kubectl delete deploy istio-ingressgateway -n istio-system



Install Knative Serving & Build:

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.4.0/serving.yaml

kubectl apply --filename https://github.com/knative/build/releases/download/v0.4.0/build.yaml

Wait until Knative Serving & Build installation is complete (all pods become “Running” or “Completed”), run these a few times:

kubectl get pods --namespace=knative-serving

kubectl get pods --namespace=knative-build

Knative is now installed on your cluster!

Create your service.yaml file and deploy it using the following:
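Here is a minimal sketch of what that service.yaml could look like (assuming the v1alpha1 serving API that ships with Knative 0.4; the service name and image are placeholders):

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/<PROJECTNAME>/<NAME>:TAG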

Deploy it:

kubectl apply -f service.yaml

Verify it’s deployed by querying “ksvc” (Knative Service) objects:

kubectl get ksvc


For more information check out the following articles:

https://cloud.google.com/community/tutorials/kotlin-springboot-container-engine

https://github.com/knative/

https://github.com/knative/build-templates

On ZKPs, STARKS, SNARKS

General information on the topic of ZKPs, not necessarily structured, but this should be an ongoing article as I continue to learn about the technology. These are notes specifically from a meetup on ZKPs.

Honest computation through Zero Knowledge Proofs is a set of steps. It is this complex math and application of cryptographic primitives that ultimately provides the proofs. Forms of zero-knowledge proofs bring post-quantum security, scalability and privacy improvements to blockchain-based, game-theory-driven software applications, and can provide an honest deduction of the steps taken to reach a provable outcome.

There are three families of zero-knowledge proofs, and they are increasingly implemented for scaling and batching transaction sets:

  • SNARKS
  • STARKS
  • BULLETPROOFS

SNARKS

f(x) = y

Suppose that you have a (public) function f, a (private) input x and a (public) output y. You want to prove that you know an x such that f(x) = y, without revealing what x is. Furthermore, for the proof to be succinct, you want it to be verifiable much more quickly than computing f itself.

Snarkjs – SNARK JavaScript implementation

Circom – DSL for creating circuits

Bellman – Rust implementation

ZoKrates – toolbox for zkSNARKs on Ethereum

ZKP Steps

  1. Trusted Setup
  2. Challenge
  3. Proof
  4. Verify

STARKS – STARKs (“Scalable Transparent ARgument of Knowledge”) are a technique for creating a proof that f(x) = y where f may potentially take a very long time to calculate, but where the proof can be verified very quickly. With the T standing for “transparent”, ZK-STARKs resolve one of the primary weaknesses of ZK-SNARKs: their reliance on a “trusted setup”.

F(X, Y) = Z

  • F: any function
  • X: public input
  • Y: private input
  • Z: public output

BULLETPROOFS

Bulletproofs (range proofs)

Any number can be represented as an inner product of two vectors.

You need at least 3 bits to represent 5, so n has to be at least 3.

The proof shows that the bit assignment a is correct.
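To make the inner-product claim concrete, here is a minimal Python sketch (not an actual Bulletproof, just the decomposition one commits to): any v in [0, 2^n) equals the inner product of its bit vector a with the vector of powers of two.

# v = <a, (1, 2, 4, ...)> where a is v's bit vector; a range proof like
# Bulletproofs proves this relation (plus each a_i in {0,1}) without revealing v.
def bits(v, n):
    return [(v >> i) & 1 for i in range(n)]

def inner(a, b):
    return sum(x * y for x, y in zip(a, b))

n, v = 3, 5
a = bits(v, n)                        # [1, 0, 1]
powers = [1 << i for i in range(n)]   # [1, 2, 4]
assert inner(a, powers) == v          # 1*1 + 0*2 + 1*4 == 5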

I am going to create a simple Zero Knowledge Proof Generator that can be spun up as a docker container and run as a serverless container.

honest multiparty computation across stakeholders

A merkle path can cover transactions across the entire blockchain.
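A minimal sketch of what a merkle path buys you, assuming simple SHA-256 pair hashing (real chains vary in leaf encoding and odd-node handling): a short list of sibling hashes proves a transaction is included under a block’s root.

import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate the last hash if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_path(leaf, path, root):
    # path: list of (sibling_hash, sibling_is_left) pairs from leaf to root
    node = h(leaf)
    for sib, sib_is_left in path:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(txs)
hs = [h(t) for t in txs]
path_tx2 = [(hs[3], False), (h(hs[0] + hs[1]), True)]   # audit path for tx2
assert verify_path(b"tx2", path_tx2, root)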

SNARK proofs are small (~288 bytes); STARK proofs are larger.

Zcash uses SNARKs.

Compress the blockchain using ZKPs.

Bulletproofs are used for range proofs in Monero.

Batching transactions using Zero-Knowledge Proofs:

  • SNARKs
  • STARKs
  • Bulletproofs

Bellman – Rust implementation

Circom (Iden3)

Modular Math

Diffie-Hellman: p = 23 (modulus), g = 5 (base)

RSA: choose 2 primes, p = 3, q = 5

n = p*q = 15
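Worked versions of these toy numbers in Python (the secret exponents and message are arbitrary picks; these parameters are far too small to be secure):

# Diffie-Hellman with p = 23, g = 5: both sides derive g^(a*b) mod p.
p, g = 23, 5
a, b = 6, 15                            # Alice's and Bob's secret exponents
A, B = pow(g, a, p), pow(g, b, p)       # exchanged in public
assert pow(B, a, p) == pow(A, b, p)     # shared secret matches

# Toy RSA with p = 3, q = 5, n = 15.
n, phi = 3 * 5, (3 - 1) * (5 - 1)       # n = 15, phi = 8
e = 3                                   # public exponent, coprime to phi
d = pow(e, -1, phi)                     # modular inverse (Python 3.8+): d = 3, since 3*3 = 9 = 1 (mod 8)
m = 2                                   # message
assert pow(pow(m, e, n), d, n) == m     # decrypt(encrypt(m)) == m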

g is a generator.

Alice wants to prove that she knows x such that y = g^x.

She picks a random v and sends t = g^v.

Bob picks a random challenge c and sends it to Alice.
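These fragments sketch the Schnorr identification protocol, an instance of proving you know x with f(x) = y where f(x) = g^x mod p. A minimal Python version using the toy group above (p = 23, g = 5, which has order 22; real deployments use large groups):

import random

p, g, q = 23, 5, 22        # toy group: g = 5 has order q = 22 mod 23

x = 6                      # Alice's secret
y = pow(g, x, p)           # public: y = g^x mod p

v = random.randrange(1, q) # commit: Alice picks a random v...
t = pow(g, v, p)           # ...and sends t = g^v

c = random.randrange(1, q) # challenge: Bob picks a random c and sends it

r = (v + c * x) % q        # response: Alice sends r = v + c*x mod q

# verify: g^r == t * y^c (mod p), since g^(v + c*x) = g^v * (g^x)^c
assert pow(g, r, p) == (t * pow(y, c, p)) % p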

ECDHE is used for https://

Abelian Group Properties for a set G

ECC – Multiplication

Discrete Logarithm Problem

ECC – Finite Fields & Discrete Logs

P+P = ? in a finite field

ECDSA signature

Multiply the private key by the generator point to get the public key point.
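A toy Python sketch of that last point, on the curve y^2 = x^3 + 7 over a tiny field (mod 17; the curve shape matches secp256k1, but these parameters are purely illustrative): the public key is the private key times the generator point.

P = 17                                   # toy field modulus
A = 0                                    # curve: y^2 = x^3 + A*x + 7
G = (1, 5)                               # generator: 5^2 = 25 = 8 = 1^3 + 7 (mod 17)
O = None                                 # point at infinity (identity)

def add(p1, p2):
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O                                          # Q + (-Q) = O
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P    # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P           # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, pt):                          # double-and-add scalar multiplication
    acc = O
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

priv = 7                                 # private key (a scalar)
pub = mul(priv, G)                       # public key = priv * G (a point)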

Pedersen Commitments are the basis for Grin.

You could use Q = vG to “hash” or hide amounts per tx.

Com(v) = vG+bH

Com(v1) = v1G+b1H

Com(v2)=v2G+b2H
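A sketch of why this construction is useful, in Python with a toy multiplicative group standing in for the curve (g^v * h^b mod p plays the role of vG + bH; parameters are illustrative and insecure): commitments hide v behind the blinding factor b, and they add up, so Com(v1) and Com(v2) combine into a commitment to v1 + v2.

p = 1009                 # toy prime modulus (illustration only)
g, h = 3, 5              # toy "generators"; in practice nobody may know log_g(h)

def com(v, b):
    # Pedersen commitment: g^v * h^b mod p  (vG + bH in additive notation)
    return (pow(g, v, p) * pow(h, b, p)) % p

c1 = com(7, 11)          # Com(v1) with blinding b1
c2 = com(4, 2)           # Com(v2) with blinding b2

# Homomorphic: the product commits to v1 + v2 under blinding b1 + b2,
# which is what lets Mimblewimble/Grin check that inputs minus outputs
# commit to zero without revealing any amounts.
assert (c1 * c2) % p == com(7 + 4, 11 + 2)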


SNARKS – QAP

There are many variants of SNARKs.

QAP variant (quadratic arithmetic program):

Computation -> Arithmetic Circuit -> R1CS -> QAP

Example: prove you know x and y such that x + y + 4 = 10, without telling x or y.

A multiplication gate x*y = 4 has a left input, a right input and an output.

Lagrange interpolation

L*R

L(x)*R(x) = O(x) for x in {1,2}

The prover would send evaluations at x for the L, R, O and H polynomials.
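A toy Python sketch of that check, assuming two multiplication gates placed at x = 1 and x = 2 (the gate values are made up): Lagrange-interpolate the left inputs, right inputs and outputs into polynomials L, R, O, and the constraint system holds iff L(x)*R(x) = O(x) at every gate point.

from fractions import Fraction

def interp_eval(points, x):
    # evaluate the Lagrange interpolation of the (xi, yi) points at x
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# gate 1 computes 2 * 3 = 6, gate 2 computes 4 * 5 = 20
L = [(1, 2), (2, 4)]    # left inputs per gate
R = [(1, 3), (2, 5)]    # right inputs per gate
O = [(1, 6), (2, 20)]   # outputs per gate

for x in (1, 2):        # L(x) * R(x) = O(x) must hold at every gate
    assert interp_eval(L, x) * interp_eval(R, x) == interp_eval(O, x)

# Away from the gate points the identity generally fails; the quotient
# H(x) = (L(x)*R(x) - O(x)) / Z(x), with Z(x) = (x-1)(x-2), is exactly
# what the prover's H polynomial witnesses.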

But how would we ensure that:

1) This “x” is hidden

2) The prover actually uses its polynomials

By doing operations on the encrypted number.

The challenger sends an encrypted challenge.

The prover uses its polynomials to answer back.

No one should know what s was in plaintext.

The setup for this challenge is encrypted and hardcoded into the entire system.

How can we evaluate a polynomial at a value that is encrypted?

Adding zero-knowledge:

blinding factors

Setup

The prover has the L, R, O and H polynomials.

The prover sends E(L(s)), E(R(s)), E(O(s)), E(H(s)).

Anyone in the network can verify it

Pairings of elliptic curves for the verifier.

This went over the Pinocchio protocol.

SNARKs –

  • Research in pairings
  • Research in optionality of elliptic curves
  • Lattice-based SNARKs
  • Tokenized World Scenario
  • How do we scale this?

Scaling Blockchains

Layer 1

  • Polkadot
  • DFinity
  • Cosmos
  • Sharding
  • Pruning
  • PoS
  • Casper
  • State Rent
  • WASM as core for scaling
  • ZKPs

Layer 2

  • DApps
  • Plasma Cash
  • Ignis
  • RollUp

Layer 2 Scaling

  • Data on Chain
  • compression
  • Data Off Chain
  • merkle trees
  • accumulators
  • vector commitments
  • ZKPs
  • Honest Proofs
  • signature aggregation
  • Fraud Proofs

Off-chain ZKPs

Honest ZKPs

STARK Batching

F(X, Y) = Z

  • F: any function
  • X: public input
  • Y: private input
  • Z: public output

More information:

starkware

snarkjs

circom DSL

zokrates

2019

This year went by extremely fast. I learned a ton about starting a business, traveled around the world, built on new blockchain protocols, went and saw the future, and came back to work on that which I believe will shape the present.

There is a confluence of technology happening before our eyes, and we’re accelerating towards it. We are all so immersed in our phones, constantly updating a collective to which we are all aspirants. Though the pendulum is shifting in a way that gives creators back the future. It will be between the technology incumbents and those with the ability to harness the gifts in applied cryptography and distributed systems. Technology is all around us and all-encompassing. There are so many new languages, new frameworks being developed, new platforms for application development, new approaches; superpowers being given to us in a golden age of technological development. The level of control that one has at their fingertips to provision, automate and control computational resources is baffling. The level of control that one has at their fingertips with sound money not bound to a particular state is baffling. With one keyboard and a monitor you can patch into the world’s computing power, for application development and for transferring ownership of value. You can leverage the compute of the giants or build on the new computing paradigm. The future of enterprise software is built on next-generation, traversable, merkleized data structures.

This past year I learned:

  • How to deploy infrastructure using Terraform, Kubernetes and Docker Containers
  • How to build Corda Blockchain Applications
  • How to write external data source connectors and ws event-driven middleware

This is going into the 7th year of writing on this blog; I am over 92,000 views all time.


Hit just over 20,000 views this year.



Here are the past several years of this same blog post:

2013: A Year in Review

2014

2015

2016

2017

2018

Here were some of my goals from 2018:

Focus in 2018

  • Build Dapps
  • Build with ReasonML and ReactJS
  • Work with 200 Customers to start building Dapps
  • Work with people passionate about Crypto/Blockchain + Salesforce
  • Build a great team
  • Go to China First Program
  • Learn Mandarin Every Day on my phone and read the phrase book
  • Run 3 Miles and lift every morning
  • Build something incredible with IPFS
  • Build Dapps NLP Engine from scratch with Prolog
  • Build Lisk Apps and deploy from Salesforce
  • Keep a daily journal and a physical calendar
  • Say my autosuggestion every morning
  • Run, and read everyday
  • Get 150K views on the site
  • Eat Healthy: Chicken, Vegetables, Almonds, Salmon, Brown Rice, Egg Whites, Protein Powder, Tuna, Oatmeal, Almond Milk
  • Drink more water

Some of the Highlights from 2018

  • Salesforce World Tour Demos in Paris and Hanover
  • CordaCon in London
  • Web3 Conference in Berlin
  • ETH SF Hackathon and built GenomeCDP
  • NYC
  • Filed my (own) first provisional patent

build, keep, learn, write, read.

Focus for 2019

  • Build Dapps
  • Learn Rust
  • Hack on Substrate
  • Run every day and lift every day
  • Genomic Health: Eggs, Avocados, Chicken, Steak, Red Inca Quinoa, Black Beans, Vegetables, Steelcut Oatmeal
  • Learn Mandarin every day instead of social media
  • Get 200k views on the blog
  • Deploy all node infra with Terraform and K8s to dapps.network
  • Build out engineering team in Bay Area
  • Learn more about STARKS / ZKPs (Zero Knowledge Proofs)
  • Build more CorDapps and become expert at writing flows
  • Travel to Japan for Customer / Business
  • Keep a daily journal and a physical calendar
  • Say my autosuggestion every morning
  • Relaunch all websites using Next.js
  • Get 8 hours of sleep
  • Code every single morning, first thing in the morning
  • Drink more water

This year I spent a lot of my time traveling and with that I read and reread some great books:

  • Principles – Dalio
  • Why We Sleep – Walker
  • The Rust Programming Language
  • Ready Player One – Cline
  • Life After Google – Gilder
  • Docker Containers
  • Microservices
  • CryptoAssets
  • Originals
  • An Essay on the Psychology of Invention in the Mathematical Field – Hadamard
  • Shoe Dog – Knight
  • The Truth Machine – Casey
  • Superintelligence – Bostrom
  • The Nature of Space and Time – Hawking

Interoperability

There is an underlying meta-surface being built that not only acts as a way of intercommunication between application-specific blockchains but provides the ability to stand up one’s own network, define functionality at the blockchain runtime level and change these networks on the fly with configurable modules. This is powered by the Substrate blockchain framework.

There will be a network of interoperable global blockchain networks transferring value, leveraging a number of methods to cryptographically check the validity of transfers using contract mechanisms. There is a reference protocol for multiple blockchain systems that leverage Bitcoin- and Ethereum-like transaction ordering and verification mechanisms.

I believe that the opportunity for redesigning the next generation of enterprise software is embedded within this confluence of factors, which leads to the following values and attributes that are either nonexistent today or better done on a blockchain:

The blockchains of the future will be built upon:

  • UTXO Transaction Model and Signatures
  • Proof-of-Authority (POA) / Proof-of-Stake(POS)
  • Aura / Avalanche Consensus Mechanisms
  • SNARGS / Privacy Computation / Zero Knowledge Proofs
  • Formally Verifiable Contracts and Agents
  • Committees / Validator Sets / Routings / Attestations / Digital Jury
  • Shared sets of compute / storage / consensus
  • TEEs / Secure Enclaves / encryption
  • Beacon Chains / Randomness / threshold relays
  • IBC / Multiprotocol / Extrinsics / Intrinsics / Inherents
  • Blockchain Middlewares and Modules
  • Web Assembly Virtual Machines / WASM
  • Data Commons / Marketplaces / Exchanges
  • Governance structures, rewards, penalties
  • Distribution mechanisms
  • Tokenization of All Assets

Substrate leverages libp2p as the peer-to-peer network protocol.

IBC – Cosmos, Tendermint and Polkadot, Substrate

Interoperability is a key component for developing the next generation of blockchain and cryptographic networks: traversable and configurable chains that have metadata that is mutable but data that is immutable. Metadata can control governance structures, consensus algorithms, modifications for different types of blockchain transaction mechanisms and other types of modules.

IPFS

I think that a decentralized filesystem built with hash links and content addressing is the future of the internet.

I am really excited to build with IPFS and combine it with other p2p protocols. The shift towards decentralized p2p technologies such as Bitcoin and IPFS is part of a shift towards individuals having full ownership, responsibility and control of their valuable digital assets. A digital asset can be a coin, but also personal data, or the metadata about what we do. These sorts of networks are shifting the pendulum back towards personal computing that can still be networked and used, however not through a custodial service that tells us where to fetch that data from.

The concept of What vs Where a piece of data is:

  • Tamper proof/evident
  • Permanent
  • Content Addressed
  • Immutable
  • Ever Present
  • Traversable
  • Hash Agnostic
  • Interoperable

It is, at the crux, a powerful concept.
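A minimal sketch of the “what, not where” idea, assuming plain SHA-256 (the real IPFS CID adds multihash prefixes and chunking): the address is derived from the content itself, so any node holding the same bytes serves the same address, and any tamper changes the address.

import hashlib

def content_address(data: bytes) -> str:
    # the address *is* the hash of the content, not a location
    return hashlib.sha256(data).hexdigest()

doc = b"hello, merkle world"
addr = content_address(doc)

assert content_address(b"hello, merkle world") == addr   # same bytes, same address
assert content_address(b"hello, merkle world!") != addr  # any change, new address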

SNARKS and Privacy Compute

SNARKs and privacy compute are going to enable the next generation of scalability and privacy for blockchain networks. Verifying data using a PCP structure is effectively the way blockchains will be able to take a message, pass it into the root of a merkle tree, hash that, and have that hash verified by the receiver of the message, therefore not having the exact message but being able to verify the output of the computation. SNARKs are still computationally heavy, but as computation improves there is the ability to apply this cryptography for blockchain network scalability across multiple chains. Para-chain implementations will create an interoperable network of libp2p-type networks where content-addressed hashing is being used to develop different paratypes.

B2B Enterprise Software

The next generation of b2b enterprise computing has arrived and is based on a substrate where companies operate in unison with verifiable, replicated processes and data to yield top-line growth. There will be a collective oneness for all businesses where data is owned by the customer, and processes are verifiable and distributed amongst a network of peers and nodes at scale. This is the natural progression of computing, where the enterprise can now ensure the integrity of its data and replicate processes amongst multiple partners in a vertical. Nodes on the network can instantly communicate with other nodes in real-time, with shared sets of entries into each party’s own transaction ledger / vault. All of the business information that is called on by Smart Contracts will be the same across every node in the network and run in a trusted execution environment to create a secure encrypted ledger.

Genomics

In September and October I did a Health and Wellness Genome Test, and it was very interesting to see the results regarding complementary foods and workout routines that are specific to your allele types. I started getting interested in different applications of blockchain technology to genetics, and at the ETH SF Hackathon built GenomeCDP:

GenomeCDP was my first attempt to learn more about MakerDAO and the Dai Stablecoin. The idea was that you could collateralize your tokenized SNPs in exchange for a loan from MakerDAO. The idea being that, once tokenized as a Non-Fungible Asset in a Multi-Collateral Debt Position, the genomic data would remain relatively stable in price and/or could be basketed with others as part of a study. Because this data has a price tag on it from big pharma and researchers, it could be collateralized. If the data was bought or sold, the owner could pay off the DAI loan, wipe the CDP and use the data. Ideally a secondary layer would be some immutable chain of usage of the data that would lead to a residual or payment to the owner of the genomic data if it were used to solve some disease down the road.

When speaking with others about the project at the hackathon, some thought the idea was somewhat dystopian, in that if you couldn’t pay back the loan then someone liquidates your NFT and your genomic info is now theirs. Others thought in a more altruistic and open way: the data should be written to a chain completely open for research as a pool, but with selectively disclosed personal data obfuscated leveraging Zero-Knowledge Proofs.

Blockchain Week SF / CESC 18

I went to CESC 17 in Berkeley last year, and this year attended Blockchain Week and CESC 18 in SF. It was much larger this year, going from a college auditorium to a grand ballroom, but the content was just as great if not better.

Notes from the week, somewhat unstructured:

  • Plasma
  • Casper Beacon Set
  • On-Chain Governance
  • Proof-of-Stake 99% / 1% POW for SNARKS/STARKs
  • Oracle Problem and Group Computation
  • Channels vs. Plasma
  • Channels on top of Plasma
  • CDPs, MDPs, Stablecoins
  • Mechanism Design / Crypto Game Design
  • Secure Enclaves
  • Merkleized Data Structures
  • Separation of Compute, Storage, Consensus
  • Middlewares and Wallets
  • WASM Web Assembly Virtual Machines
  • Machines Owning and Transferring Value
  • Reputation Systems
  • Avalanche Consensus
  • Sparse Merkle Trees
  • Sharding
  • Inter-Committee Routing

Notes from Web3 in Berlin

  • Money from first principles
  • Verifiable links, trust
  • the commons, data markets
  • web3 governance and standards
  • mixing layer 2 with substrate
  • cosmos and IBC

Substrate

Learning more and more about the Polkadot network and Substrate. I need to continue to deep dive on this protocol and learn how to write runtime modules in Rust to create interoperable application-specific parachains.

  • Authors
  • Chains
  • States
  • Extrinsics
  • Intrinsics
  • Inherents

Internet = Communications

Bitcoin = Money

Ethereum = Computation

Substrate = Interoperability

Some thoughts going into 2019:

  • Secure Enclaves, Replicated State Machines and state-of-the-art BFT will disrupt Cloud Computing
  • Privacy Computation using zero-knowledge proofs (ZKPs) will drive the adoption and growth of these networks
  • Tokenized Data Markets and Tokenized security liquidity
  • DevOps confluence will drive scalable blockchain network adoption
  • Universal Timekeeping Mechanism via Blockchains
  • Blockchain Networks need Safety, Liveness, and Resilience
  • Replicated and partitioned agreed-upon state
  • Replica resiliency leads to an increase in latency
  • Need to deep dive on running a Lightning Node and Channels
  • Tezos Baking Delegate planning
  • Elastico, OmniLedger, RapidChain

The message has to be very concrete and tangible:

what does it do

what problem does it solve

how does it improve x

what is the main goal

why is that the goal

condensed in general to:

who or what brought it about

what it’s made of

what shape it has

what it’s for


Definiteness of purpose, the knowledge of what one wants, and a burning desire to possess it. I will continue to build the mastermind group and devote my energy to drive combinatorial creativity through technology and people.

Happy New Year.

Live the Dream.

-Dom

Interoperability Protocols

The next-generation of interoperable blockchain frameworks are being built upon:

  • UTXO Transaction Models / State Transitions
  • Proof-of-Authority (POA) / Proof-of-Stake(POS)
  • Aura / Avalanche / Next-gen BFT Consensus Mechanisms
  • SNARGS / Privacy Computation / Zero Knowledge Proofs
  • Formally Verifiable Contracts and Agents
  • Committees / Validator Sets / Routings / Attestations / Digital Jury
  • Sharded sets of compute / storage / consensus
  • TEEs / Secure Enclaves / encrypted ledgers
  • Beacon Chains / Randomness / threshold relays
  • IBC / IPFS Multihash / Multiprotocol / Extrinsics / Intrinsics
  • Blockchain Middlewares and Modules
  • Web Assembly Virtual Machines / WASM
  • Data Commons / Marketplaces / Exchanges
  • Governance structures, rewards, penalties
  • Distribution mechanisms

There is an underlying meta-surface or substrate being built that not only acts as a way of inter-blockchain communication but provides the ability to stand up one’s own network. There will be a network of interoperable global blockchain networks transferring value, leveraging a number of methods to cryptographically check the validity of transfers using smart contract mechanisms, powering the convergence of money, governance and technology. There is a reference protocol for multiple blockchain systems that leverage Bitcoin- and Ethereum-like transaction ordering and verification mechanisms.

There is a confluence of factors which will lead to the development of multiple blockchain protocols that are local to specific regions but that interact with the substrate layers. These sections / zones will have their own form and ordering of transactions that are not part of the global UTXO set. The UTXO sets will use the same replicated state machine set of nodes with different validator sets of UTXOs. There will be an IBC of blockchain nodes used for consumer and business applications. There will need to be a multihash format for sets of transactions that are traversable between the nodes in the network layer. The substrate layer is ideal for creating one’s parachain that is referential to other headers and the agreed-upon state of that blockchain’s transactions.

A traversable and interoperable internet of decentralized blockchain networks leads to:

  • Verifiable transactions that are irreversible, traversable and able to be trusted as the agreed-upon and signed part of the global set of state. There is basically a new form of innovation fundamental to this next version of the internet in signed verifiable transactions, analogous to the hyperlink.
  • Being able to transfer value, assets and tokenized cryptographic application network assets that are publicly verifiable and owned leveraging public-private key infrastructure. This will lead to a global shift in wealth creation and digital asset class management, and in the ways that we trade new forms of money that have been built from 0 with first principles in mind on these breakthrough networks.
  • True censorship resistance and ownership of the digital assets and data we create. The ownership of data that is, again, verifiable and immutable and completely disinterested in anything but protocol / consensus rules is foundational to the shift that we are seeing.
  • Applications of privacy computation, or SNARGs, SNARKs and zero-knowledge computation, which is ultimately much more computationally heavy but enables a digest of data to be verified without disclosure of the data. This has implications for every blockchain system; in addition, the timing of its advent coincides with the ongoing improvement in the cost of computational resources, which we can anticipate will continue to improve and increase in speed / usability over time.
  • Stored smart contract execution mechanisms that can programmatically transfer value, whether between people or machines. This is the first time that value has become packets, and the incentive structures of people or systems of computation have an understanding and can act in a way that is beneficial for the system, and detrimental for a participant who chooses to cheat. These cryptoeconomic breakthroughs will continue to be the foundation for the governance mechanisms, the rules that we create, the authorities that we agree upon and the systems that we build.

What is happening is profound in that, at the same time the security model of the incumbents is unraveling, these new tools are emerging as the response. The protocols developing today will ultimately set the stage for the future of computing and society over the next decade, and similarly to the last revolution, a few are in the position to decide how this technology is distributed and how it develops over time in its governance, scale and ethos. One of the most important primitives of all of this is that these systems put cryptographic state-level power in the hands of the key holder to verify against a permissionless system that is a set of networks and its constituents. We are just beginning to see the effect that cryptoeconomics and web3 will have on the world.

For more reading check out:

Parity Labs: Polkadot Substrate

Tendermint: Cosmos

Interledger

 

Corda: Enterprise Business Networks

Corda is a global network of nodes for enterprise-grade, real-time transactions and asset workflows. It is a platform for developing the next-generation of enterprise software where state and processes can be shared and uniform across multiple trusted entities leading to operational efficiency, speed and better customer experiences.

Last week I attended CordaCon in London and was very impressed with the work that has been happening in the Corda ecosystem. Networks are being developed very quickly in a number of industries such as financial services, insurance, trade and many others.

My takeaway from the conference was that this protocol is being built from the ground up with a long term enterprise future state in mind. This is being done at a few different levels:

Business Network Governance (BNG)

Business Network Designers (BND)

Business Network Operators (BNO)

The enterprise software industry will be rewritten on distributed ledger technology platforms like Corda. This is not a matter of if but when. There will be a CorDapp for every single existing cloud or on-premise software service company that exists today.

This means that in time there will be an ecosystem of developers and entrepreneurs working as Business Network Designers (BND) of white-label products or working on behalf of an existing company to get their particular application on the Corda network. The analogy is the Xbox, Xbox Live and video games: essentially Corda nodes are the Xbox, the Corda network is Xbox Live, and CorDapps are the video games. Each company downloads the CorDapp to their node and can interact with other companies in the network.

This ultimately leads to an ecosystem of CorDapps built by partners and produced on the Corda Marketplace. There will soon be thousands of CorDapps available for use by companies of all sizes. This is where it could get very interesting: you have essentially a global network of nodes where any single company could be reachable for shared business processes and transfer of value, not just communication. Today processes may be automated or antiquated; not just within companies but across partners, the best processes can be shared via a portal that exposes an underlying internally configured application or data.

What does the business-to-business enterprise space look like when there is a real-time network for shared processes, state and transfer of value? Globally reachable like a next-gen Bloomberg, but not just for trades and messaging: for any front-office, middle-office or back-office application. The future state is a much more secure, automated and deterministic business process; assets and value flow freely across the business network and are interoperable with other sector-specific business networks.

At a lower level in the stack there is the opportunity for companies to become BNOs: developing, building and maintaining business networks. This covers the secure and scalable infrastructure requirements of running the required node in a given network. The BNO is responsible for provisioning nodes and actually running the node, whether in a secure cloud infrastructure or on-premise at the customer’s location. This is also synergistic with the DevOps movement of dockerized applications, containers, container orchestration, Kubernetes, package management via Helm, and CI/CD for networks and applications. One of the large challenges today with these types of networks is upgrading and coordinating updates across multiple peers for different versions of the applications being used across the network. Leveraging the DevOps stack of tools to automate pipelines and schedule updates across multiple nodes solves the challenge of securely updating the threads of the network.

Business Network Operators today have superpowers at their fingertips to run massive enterprise-grade business networks as an infrastructure provider in addition to the opportunity to develop application specific software to manage the networking and delivery of enterprise blockchain networks.

The first time I looked into Corda, my initial thought was that this was a B2B messaging protocol: atomic transactions in the form of UTXOs between nodes in the network, now generalized with any form of mutable state and a history of shared consumed states between nodes. The second time I deep dived into Corda, a month or so later, I thought of it as a microservices network of application-specific nodes; the example being that the Thomson Reuters node provides an oracle for specific financial data and is reachable on the network. This leads to the question of what type of CorDapp company X would have, etc. After this past week I think that Corda represents a shift towards the future of decentralized and distributed computing in the enterprise. I believe it to be the foundation for the next generation of companies building enterprise software.

Enterprise Blockchain Computing is an entirely new industry consisting of digital transaction ledgers, Tokenized Application Networks and Tokenized Assets that can be split, evolved, shared, burned, verified; mutated in any way, effectively giving companies the tools to program money. The financial services applications will be immediate, but I believe this technology will reach far into other existing enterprise software domains. There is a chain of signatures that enables companies to have verifiable business processes that are direct, final and not broadcast to every node in the network. This will augment every existing agreement and contract process across every industry. Create a txn, sign the txn, the counterparty inspects and verifies the txn, signs and commits it, then the party verifies and commits the transaction signatures; representative of application-specific flows between companies, this will displace siloed systems of record. Replicated state machines using consensus algos such as Tendermint and Avalanche BFT will drive the notary consensus within networks of entities sharing processes, verifiable state and transactions. This System of Intelligence can be operated on and called upon by agents that drive business outcomes by activating the signature workflows across a global network. Every company is in reach, using the same stacks of enterprise-grade software, and operating in a much more efficient and effective manner. There will be coverage across networks for tokenized assets and the ability to traverse other tokenized merkle trees as well, perhaps leveraging multihash and IPFS as a network layer. There will be a future of global digital mutuals and distributed general ledgers as proposed by the Cordite project. Essentially any generalized mutable object will be able to be sent across the network, for no per-message fee, to any organization in the world. An arbitrary mix of assets and a few catalysts for onboarding the nodes in the network will be needed and over time will emerge. Business Network Operating Companies will develop the next generation of products for server and blockchain network provisioning, configuration and management targeted towards enterprise customers. This in turn will lead to the proliferation of industry- and sector-specific networks, as seen with the Marco Polo Trade Finance Network and B3i’s insurance network.

In time I believe that we will see a high-performance global business network of nodes: a global collective transaction and state layer for all businesses. Applications for front, middle and back office; sales and service, signatures, messaging, general ledgers and accounting, buy- and sell-side contract automation and straight-through processing. It will consist of states, schemas, contracts and flows; Prolog for logic programming to make the contracts adaptive; and a search engine across a global business network of nodes transacting value and enterprise resources.