Multi-Protocol Enterprise Blockchain Computing

The enterprise blockchain computing market is growing rapidly, with Fortune 500 companies forming internal teams to develop blockchain applications that create separation in their respective industries. There are real differences between protocols, that is clear. But being locked into, or incentivized toward, a specific protocol or vendor is not advantageous to the enterprise customer. Certain requirements must be met for the enterprise to leverage blockchain and achieve maximum value from the technology.

Shared transactional processes and shared state are a must.

We are past provisioning and have entered the era of discovering known nodes that are granted authority to read from and write to the ledger. Forming a consortium is a methodology for streamlining authentication and discovery. A DID should be used to authenticate, create, sign, and record a transaction; this is important. Workflow and shared process are separate from the use of a DID at the enterprise level.

The current paradigm: there is a POC, then a pilot with partner company X, who will provide this set of data to this shared ledger, removing the need to manually reconcile the process and therefore giving us the operational efficiencies needed to reduce cost and increase top line. This is what we are hearing from many enterprise customers in the US, Europe, and Asia. However, it is not a direct representation of the reality of enterprise blockchain solutions today. Where are the production applications for provenance, connectivity, immutability, and transactional efficacy? There needs to be a discovery mechanism for application-specific nodes that enterprises can offer and share: a network of enterprise nodes that leverages a B2B messaging protocol, shared process, shared state, transactional finality, process upgradeability, scalability, and partitioning, with interfaces tightly integrated into growth-critical data assets and third-party applications.

There needs to be a shared microservices approach to building these applications, so that participants can trust that the process, and therefore the result, is shared; consensus on the result follows from consensus on the process.

These processes need to be upgradeable. The immutable nature of smart contracts in today's contract-oriented programming abstraction does not provide this without another protocol layer on top of the core virtual machine: a kernel that is upgradeable, or a set of libraries inserted into the contract on original deployment that enables versioning or referencing for later use.
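
As a loose illustration of the versioning idea (not any particular protocol's mechanism), here is a minimal JavaScript sketch of a process registry that dispatches to whichever version of a shared process is current; names and shapes here are hypothetical:

// Hypothetical registry mapping version numbers to shared process implementations.
const processRegistry = new Map();

function register(version, impl) {
  processRegistry.set(version, impl);
}

// Execute a transaction against a given (or the latest) process version.
function execute(state, tx, version = Math.max(...processRegistry.keys())) {
  const impl = processRegistry.get(version);
  if (!impl) throw new Error(`no process registered for version ${version}`);
  return impl(state, tx); // each implementation returns the new state
}

// v1 of a shared settlement process
register(1, (state, tx) => ({ ...state, balance: state.balance - tx.amount }));
// v2 adds a fee without callers having to change
register(2, (state, tx) => ({ ...state, balance: state.balance - tx.amount - tx.fee }));

console.log(execute({ balance: 100 }, { amount: 10, fee: 1 })); // { balance: 89 } via v2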

Reserving state is currently a public good and is free beyond the one-time cost of deployment. This will need to change, or there will need to be a way to decide what on the chain is garbage that can be removed over time, due to inactivity or lack of incentive or payment, versus what needs to remain in the world state. Enterprise blockchain networks, or business networks within the realm of permissioned, known-access networks, are essentially Intranet 2.0. There are innovations happening on both closed and open blockchains. Permissionless innovations are undoubtedly driving the future of this technology, doing things that were never before possible. Non-repudiation on permissioned networks, using virtualization to share a federated data set, is one enterprise application that is industry- and sector-agnostic, though it requires customization and services for each individual implementation.

Blockchain networks are enabling the permissionless tokenization of anything. Anything can be tokenized, and a market can be created from any one entry point on the network. Initial Coin Offerings are the initial phase of tokenization for capital formation; the tokenization of assets that are not formally owned or approved is interesting. Is there a way to provide an abstraction layer for approving the tokenization of a digital or real-world asset? How does one tie the tokenization of a real-world asset to the blockchain's digital world state? Are virtual inventories usable in the physical world, and is the overlay needed?

There is a need to build enterprise blockchain applications that enable a shared, single view of transactional processes to augment existing siloed systems. Opting into business networks that are completely decentralized, where assets frequently move between participants in the market, is how tokenization should work and how assets should be moved onto ledgers. Where does the data need to come from to execute computational logic that is shared and agreed upon? The oracle problem is a multi-protocol problem. It does not matter whether you are using Ethereum, Hyperledger, or Corda; there needs to be a trusted verification mechanism for the data that is used to auto-execute a transaction flow or set of state changes.

Enterprise smart contracts consist of transaction types and transaction flows that are distributed on a log and executed on every node. This is true of smart contracts on Ethereum, which are imprinted into the log that every node shares, and of chaincode that is instantiated on Hyperledger nodes in the network. There is a set of functions that cause state changes within a group of peers, whether that is thousands of nodes or four enterprise customers. The state is agreed upon, and the individuals or companies can trust that they are looking at the same state, derived from the shared, agreed-upon processes.

At their core, blockchain technologies are ledgers, yes; ledgers with stored procedures. What is new is distributing verifiable value and state. Creating business networks and virtualization of data across multiple protocols at the application layer provides a better and faster way of doing business. Virtualizing value and distributing it amongst a variable set of unknown, trustless peers is new.

Is transactional efficiency the most important aspect of blockchain technologies? Has it created the separation and the arbitrage? Is it as fast to send a payment and verify a transaction as it is to get the physical good there? Is there an opportunity to optimize and recalibrate the efficacy of the payment and the transfer of goods? Which one is driving the speed of the other? If the transactional efficiencies create enhanced visibility, what part needs to be sped up?

Docker containers are core building blocks of blockchain nodes. Each company transports a Docker container as a transferable set of its transaction history and its processes. Where a Docker container lives is unimportant as of now, but it is a question that must be answered in time: where and how a container is transported and instantiated. Where do the images of state live, and are these images the immutable logs of verticalized enterprise operations in the future?

Automation and provisioning of nodes is the first step; using Step Functions and CloudFormation templates or Terraform, that first step of the process can be automated for the enterprise. Though companies, developers, and system integrators are figuring out that speed to provision the network does not lead to an immediate value add for the enterprise customer. Automation of resources via API is a superpower we have been gifted; it is powerful to have effectively unlimited computational resources that you can set up with a JSON template: a template that defines a set of conditions and the requirements for the network, the API gateway, the containers, and the subnets.
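
As a sketch of the kind of template that paragraph describes, expressed here as a JavaScript object loosely following CloudFormation conventions; the resource names and property sets are illustrative, not a complete, deployable stack:

// Illustrative template: subnets, container hosts, and a gateway for the nodes.
const networkTemplate = {
  Description: 'Blockchain network: subnets, containers, API gateway',
  Resources: {
    NodeVpc:     { Type: 'AWS::EC2::VPC', Properties: { CidrBlock: '10.0.0.0/16' } },
    NodeSubnet:  { Type: 'AWS::EC2::Subnet', Properties: { CidrBlock: '10.0.1.0/24' } },
    NodeCluster: { Type: 'AWS::ECS::Cluster' },        // container hosts for the ledger nodes
    NodeGateway: { Type: 'AWS::ApiGateway::RestApi', Properties: { Name: 'node-gateway' } },
  },
};

console.log(JSON.stringify(networkTemplate, null, 2)); // the JSON template itself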

This power, unfortunately, does not by itself lead to achievable key performance indicators for the enterprise customer. It leads to less downtime for hosted transaction services, but the enterprise customer has different requirements. The enterprise customer needs to define and discover the KPIs for a blockchain network. What were the KPIs for the first intranets? Public blockchain networks don't need KPIs; the price of the coin is the KPI.

Multiple blockchain protocols are being used in the enterprise, and enterprise customers need to understand the advantages and disadvantages of each. The next step is actually getting the data into the protocol. The third step is implementing an agreed-upon process, or the agent that is shared between the companies. Then comes determining a way to value the network by setting performance indicators for whether the investment is returning dividends for all participants. The most important part of the above is the data. The data used in the enterprise on shared networks is of utmost importance. Why is this?

Shared, real-time, trusted data can be used across other technologies. Is there something to this? The fact that blockchains can bifurcate is an important lesson: bifurcation is a failure of consensus. Until that failure of consensus, there is agreement on state.

Enterprise agreement on state, leading to verticalization, may be the killer application, at least for Intranet 2.0. Blockchains have certain properties and attributes that are secured by a proof of X. These attributes and properties can be used as a solution to a dare or a question. Logically centralized but organizationally decentralized databases secured by a proof provide transactional efficiencies, not data efficiencies. Immutable persistence for transactions is key, as is the petrified effect that blockchains provide as an attribute. The enterprise will adopt blockchain protocols and applications that provide a scalable and secure mechanism for using attributes that provide efficiencies by proof. These systems create separation in their respective industries.

For public networks, tokenization rooted in radical truth is the killer application. Decentralized organization through incentives creates virtual camps that can self-organize. Enterprise organization still needs a chief conductor for the orchestra. The microservices coordinator is important to improve, coordinate, automate, and actually actualize the value of the network.

Enterprise blockchain networks will take time, though it is no longer early. We are entering the node (org) discovery era, and this will be done in different ways; my bet would be on using a platform enterprises are already on and providing a way for them to discover who else is also already there.


Consensus 2018 Hackathon


The consensus from the 2018 hackathon was:

  • a number of blockchain technology platforms are emerging
  • they are focused not on the protocol level, but on creating additions to the base protocols
  • these take the form of workbenches, platform tooling, and application environments

There were a number of different hacks built over the weekend focusing on supply chain and food with GrowNYC. The stack used by most teams was Hyperledger and Chainpoint.

There was also the emergence of protocols such as Interbit and Nebulas.

Trading infrastructure and technology was provided by Alpha Point, and there were a few Ethereum dapps focused on the ERC721 token standard, but not as many as I would have expected.

This is the fourth year I have attended this hackathon. Two years ago it was in the same room at the Microsoft Center, and I pitched MicroSaas, an application I built using BlockApps on Azure. Last year, at Deloitte, I used the IBM Blockchain stack (Fabric and Composer) for the hackathon project. This year I worked with Hyperledger Fabric, Hyperledger Composer, and Chainpoint to build out the GrowNYC challenge. I embedded UUIDs from Chainpoint into Hyperledger transactions and then surfaced these in Salesforce as an external object. This combined the security of the public Bitcoin blockchain, by providing an immutable proof and hash of the data, with a business network of known participants and assets. Specifically, I linked a UUID and proof with a Hyperledger transaction and eventId.

The flow from a Hyperledger transaction JSON to a Bitcoin-anchored proof, using the Chainpoint API:

  • Instant: Hash → .submitHashes() (/hashes) → UUID returned
  • ~15 seconds: UUID → .getProofs() (/proofs) → CAL (Calendar) proof returned
  • ~90 minutes: UUID → .getProofs() → BTC proof returned
  • Instant: Proof → .verifyProofs() (/verify) → verification result
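
In code, that flow looks roughly like this with the chainpoint-client npm package; a sketch assuming the v3-era promise API, with error handling and retries omitted:

const chp = require('chainpoint-client');
const crypto = require('crypto');

async function anchor(hyperledgerTxJson) {
  // Instant: hash the transaction JSON and submit it; UUID proof handles come back
  const hash = crypto.createHash('sha256').update(hyperledgerTxJson).digest('hex');
  const proofHandles = await chp.submitHashes([hash]);

  // ~15 seconds for the CAL proof, ~90 minutes for the BTC proof
  const proofs = await chp.getProofs(proofHandles);

  // Instant: verify whatever proofs have been returned so far
  const verified = await chp.verifyProofs(proofs.map((p) => p.proof).filter(Boolean));
  return { hash, proofHandles, verified };
}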

I believe that hashes of telematics data can be embedded into the Bitcoin blockchain, and in response a proof can be used to validate the transactional data at the time of creation. It is an interesting concept. However, the question will always be what the trusted data source is: even though the blockchain is immutable and in a sense secured by both the Bitcoin and Hyperledger hashes, the data used in the hash still needs to be trusted. The crux is that the data cannot be changed later on. It is petrified, as is the hash of the data, as is the proof, and the link to a specific transaction Id.

I believe there was not as much focus on Ethereum app development as in past years. This could also be because Ethereum developers were at the Ethereal Summit over the weekend. It seems the hackathon has moved away from the decentralization movement of using technology combinations like Ethereum and IPFS. It could also be that the prizes and incentives were more aligned with supply chain and permissioned blockchain networks.

In seeing this evolve over the past few years I have some thoughts.

The first year, everyone was building on Bitcoin; there were maybe 15 teams, and two of them built on Ethereum. The year after, the majority of the projects focused on Ethereum development; last year there was a good balance of Ethereum and Hyperledger, and this year it was mostly Hyperledger.

I think reasoning about blockchain use cases as participants, assets, and transactions is a much different view of and style toward blockchain development than contract-oriented programming on the EVM.

It is a much different set of tooling and workflows as well.

One of the interesting things launched was the Azure Blockchain Workbench. Though I have not had a chance to work with it yet, it seems to be how Microsoft is moving up the stack from just providing nodes as a service for various protocols.

I was surprised there was not more focus on new types of ERC standards. There is a ton of innovation happening right now with token standards and the Ethereum network. My overall sentiment is that Ethereum has captured the groundswell across the board for development and is enabling things that have truly never been possible before on the web.

Though I like using the Hyperledger stack to model out business networks, part of me finds myself trying to rationalize things like being able to upsert existing records that have been written to the chain, and building "smart contracts" when it is really not a contract-oriented network.

Ethereum was created out of this need, and digitally enforceable agreements are now the foundation of the most innovative, cutting-edge, and well-funded projects in the world. On another note: platforms like EOS, Lisk, and Cardano were not actively used at the hackathon.

I think the ease of use of Hyperledger Composer for the modeling-language schema, and the use of JavaScript for transactions, makes for a solid toolkit, especially when used with events and websockets. I want to see more decentralized application development happening on platforms that are cutting-edge and enabling things that were never possible before.
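
For context, a Composer transaction processor is plain JavaScript annotated with the transaction type it handles. The network namespace and type names below are hypothetical, but the shape follows the Composer API (getAssetRegistry, getFactory, emit):

/**
 * Ship produce to market; namespace and types here are hypothetical.
 * @param {org.grownyc.ShipProduce} tx The transaction instance.
 * @transaction
 */
async function shipProduce(tx) {
  tx.shipment.status = 'IN_TRANSIT';
  const registry = await getAssetRegistry('org.grownyc.Shipment');
  await registry.update(tx.shipment);

  // emit an event that clients subscribed over websockets can react to
  const event = getFactory().newEvent('org.grownyc', 'ShipmentUpdated');
  event.shipmentId = tx.shipment.shipmentId;
  emit(event);
}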

The hackathon developer versus the protocol developer is also a very distinct difference.

There are engineers working to make these networks scale and work in the future, while there are application developers hoping that they figure it out soon. One project I wish I had worked on over the weekend was Quorum. I want to get a better understanding of how to set up Quorum nodes and integrate them into Salesforce.

The next two protocols that I am going to develop applications on will be Quorum and R3 Corda.

The emergence of multiple protocols that enable decentralized application development has accelerated. Regardless of the protocol, there will still need to be core programming of the contract logic and transaction logic, and figuring out things such as certificate and key management, data persistence, and authorization.

I think the most innovative project was Tinder for CryptoKitties: swipe left or right on the kitties, and you can chat to bid and negotiate.

Until next year.


Means of State Replication

The means of developing replicated state machines.

Decentralized and permissionless state replication, as well as distributed application-specific state replication, are defining the next era of computing: replication of data on distributed systems using leaders and workers, distributed microservices and containers, peer-to-peer blockchain protocols, UTXOs, and digitally enforceable agreements and agents. Blockchain is the next-generation database architecture for trust-minimized computing.

A database needs to do two things:

When you give it some data, it should store the data; and when you ask it again later (via a query language), it should give the data back to you.

Data models: the format in which you give the database your data.

  • SQL
  • key-value
  • NoSQL
  • Graph

Query Language: The mechanism by which you can ask for it again later.

How does a network replicate its state so that:

  • the graph grows
  • it determines validity
  • it determines order, if needed
  • the history is shared
  • participation is economically incentivized
  • logic is shareable and upgradeable

Trust is the key feature of blockchain protocols.

Because of this, the protocol can mutate, replicate, and incentivize. Is the process of mutation ever finished? Code is never static. Why is this so accelerated? We are observing mutations on a much faster timeline, and at a larger scale, than we could have imagined. The growth and replication of these networks are accelerating.

Replication of state is the means by which a blockchain can produce a copy of itself, with its governance and its liveness. A copy does not need to be produced in order to write; a net-new copy needs to be produced in order for a mutation to take place. Forking a blockchain is, by nature, a change.

A blockchain is comprised of linked blocks and linked transactions; state is replicated and address space is shared.

Protocols change and improve. Data is linked. Means of replication reproduce a series of states. Are replicated state machines necessary in order to achieve consensus? Why is there a need to take control back? What is being unveiled, and what are we converging on?

Why do we trade storage space for address state?

Does the distance between the space and the state change? What is the difference between on-chain state and cold key-values, static and dormant? One replicates state in order to achieve consensus.

Stateful databases have a persistence layer. Does it matter what type of database is used, in determining whether something is a blockchain?

Lisk: MySQL, Corda: H2 vs. key-value (Parity: RocksDB, Geth: LevelDB)

To store and replicate state, blockchains have to have a persistence layer. With blockchains, the consensus mechanism and state are separated.

If state isn't replicated across all nodes, is it still considered to be a blockchain?

  • OLTP – optimized for transaction processing
  • OLAP – optimized for analytics
  • OLTP storage engines
    • Log-structured – only permits appending to files and deleting obsolete files; never updates a file that has been written
      • Bitcask, SSTables, LSM-Trees, LevelDB, Cassandra, HBase, Lucene
    • Update-in-place – treats the disk as a set of fixed-size pages that can be overwritten
      • B-Trees
  • Data warehouses
    • It becomes important to encode data very compactly, to minimize the amount of data the query needs to read
    • Column-oriented storage helps achieve this goal
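
To make the log-structured idea concrete, here is a toy append-only store in JavaScript, in the spirit of the example in Designing Data-Intensive Applications; real engines add indexes, segments, and compaction:

// Toy log-structured store: writes only append, reads scan for the latest
// value, and nothing is ever updated in place.
const fs = require('fs');
const LOG = 'data.log'; // hypothetical log file; format is one `key,json` per line

function dbSet(key, value) {
  fs.appendFileSync(LOG, `${key},${JSON.stringify(value)}\n`); // append-only
}

function dbGet(key) {
  if (!fs.existsSync(LOG)) return undefined;
  let result;
  for (const line of fs.readFileSync(LOG, 'utf8').split('\n')) {
    const sep = line.indexOf(',');
    if (line.slice(0, sep) === key) result = JSON.parse(line.slice(sep + 1));
  }
  return result; // last write wins
}

dbSet('42', { name: 'San Francisco' });
dbSet('42', { name: 'Barcelona' });
console.log(dbGet('42')); // { name: 'Barcelona' }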

Consensus

What types of decentralized transaction ledgers have the security guarantees of openly replicated system state across every node?

Means of transaction:

  • UTXOs
  • Accounts

Means of replication:

  • DAGs (Directed Acyclic Graphs)
  • Merkle Trees
  • CRDTs (Conflict-Free Replicated Datatypes)
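
Of these, a Merkle tree is the easiest to sketch. Here is a minimal JavaScript Merkle root, duplicating the last node on odd levels as Bitcoin does; an illustration, not a hardened implementation:

const crypto = require('crypto');
const sha256 = (data) => crypto.createHash('sha256').update(data).digest();

function merkleRoot(leaves) {
  let level = leaves.map((l) => sha256(l));
  while (level.length > 1) {
    if (level.length % 2 === 1) level.push(level[level.length - 1]); // duplicate last node
    const next = [];
    for (let i = 0; i < level.length; i += 2) {
      next.push(sha256(Buffer.concat([level[i], level[i + 1]]))); // hash each pair
    }
    level = next;
  }
  return level[0].toString('hex'); // a single hash commits to every leaf
}

console.log(merkleRoot([Buffer.from('tx1'), Buffer.from('tx2'), Buffer.from('tx3')]));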

Means of consensus; the consensus algorithm.

  • Asynchronous Byzantine Fault Tolerant
  • Byzantine Fault Tolerant (BFT)
  • Fault Tolerance (Raft)
  • Practical Byzantine Fault Tolerance (PBFT)
  • Proof-of-Work
  • Proof-of-Stake
  • Delegated Proof-of-Stake (DPoS)
  • Avalanche

Data structures are replicated as state machines that are crypto-economically incentivized. Is there a more secure way to create replicated state machines with no native token?

A confidence score creates determinism for state shared on a Merkle DAG; if it falls below a certain value, the state flips. RSMs (replicated state machines) are traversable, many-to-many state structures. Unconsumed state transactions are replicated on isolated Merkle DAGs. Avalanche DAGs are live, replicated state graphs.
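
A rough sketch of the repeated-sampling intuition behind Avalanche-family protocols; the parameters and peer model here are hypothetical simplifications of the actual protocol:

// Keep querying k random peers; if a supermajority prefers the other value,
// flip to it; finalize after beta consecutive confirming rounds.
function decide(peers, myValue, { k = 10, alpha = 0.8, beta = 15 } = {}) {
  let value = myValue;
  let confidence = 0;
  while (confidence < beta) {
    const sample = [];
    for (let i = 0; i < k; i++) {
      sample.push(peers[Math.floor(Math.random() * peers.length)].preference);
    }
    const counts = {};
    for (const v of sample) counts[v] = (counts[v] || 0) + 1;
    const [top, n] = Object.entries(counts).sort((a, b) => b[1] - a[1])[0];
    if (n >= alpha * k) {
      confidence = top === value ? confidence + 1 : 1; // confidence resets on a flip
      value = top;
    } else {
      confidence = 0; // no quorum this round
    }
  }
  return value; // the value this node considers final
}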

Distributed, Permissioned, and Application Specific

We can think of the ledger, from each node's point of view, as the set of all the current (i.e. non-historic) states that it is aware of. State can be shared or unshared. State can be an agreed-upon fact. The fact is shared; the state is shared. Historically, a record of agreed facts can be seen as agreed-upon, historically shared state. “This sequence of state replacements gives us a full view of the evolution of the shared fact over time.” UTXOs are linked as historical states/facts.

Validity

Is a transaction valid or invalid? Does an invalid transaction need to be included in the ledger?

Atomicity

“a transaction is just a proposal to update the ledger. It represents the future state of the ledger that is desired by the transaction builder(s):”


  • The transaction’s inputs are marked as historic, and cannot be used in any future transactions
  • The transaction’s outputs become part of the current state of the ledger


“Outputs need to be valid to produce new inputs; to create liveness, layer historical state. Each transaction consumes a set of existing states, to produce a set of new states.”
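
A small JavaScript sketch of that consume/produce model, assuming an in-memory ledger map: a transaction marks its inputs historic and adds its outputs to the current state:

function applyTransaction(ledger, tx) {
  for (const id of tx.inputs) {
    const state = ledger.get(id);
    if (!state || state.historic) throw new Error(`input ${id} already consumed`);
    state.historic = true; // inputs can never be used in a future transaction
  }
  for (const out of tx.outputs) {
    ledger.set(out.id, { ...out, historic: false }); // outputs join the current state
  }
}

const ledger = new Map([['state-0', { id: 'state-0', owner: 'A', amount: 10, historic: false }]]);
applyTransaction(ledger, {
  inputs: ['state-0'],
  outputs: [{ id: 'state-1', owner: 'B', amount: 10 }],
});
console.log(ledger.get('state-0').historic, ledger.get('state-1').owner); // true 'B'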

Order

Is order of importance? Is the order of valid transactional state changes a prerequisite to achieving consensus? Is it important who owns the fact, the state to be shared? Gossip graphs: is it important who told the fact, and where the fact came from?


Access

Doorman / certificate authority: how do we let you into the network? How do we get you up to speed on what has happened, or do we not need to for you to join?

Logic across data structures: the scope is being able to program logic that is replicated across permissionless state machines.

2018

There is a shift happening, rooted in the rapid expansion of decentralized cryptographic protocols. Public/private key pairs are becoming common for securing and storing digital assets, providing proof, and determining truth. Where our data lives, who owns it, and who can change it are all questions being asked. Collectively we have a distributed consciousness in the internet; there is a small gap from when we wake up to when we check state. This could be a glance at a cryptocurrency price, notifications, categorizing messages; it is our terminal into a digital world that is layered and distributed. We tap into the information layer; we trade intangible digital value, waves that we are collectively starting to operate on constantly. The technology is converging and accelerating; there is an interesting shift happening, and decentralized technologies are driving it.

This past year I learned:

  • How to build Ethereum Blockchain Applications
  • How to build Hyperledger Blockchain Applications
  • How to develop enterprise managed packages on the Salesforce Platform

This is going into the 6th year I have written on my blog. I am at just over 70,000 views all time.


Hit a little under 28,000 views this year in total; a 250% increase from last year. This was driven mostly by cryptocurrency and ICOs becoming mainstream and Blockchain becoming top of mind in the enterprise throughout the latter half of the year.


Here are the past few years of this same blog post:

2013: A Year in Review

2014

2015

2016

2017

These were some of my goals from 2017

Focus in 2017

  • Build Dapps
  • Build with Babel, ReactJS, Redux, GraphQL, and Node
  • Keep building Bots and a VR/AR interface library in Unity
  • Learn about Containers, Docker, Kubernetes
  • Learn how to write python scripts with Tensorflow, Theano and Keras libraries
  • Write 21.co computer python scripts
  • Write 1000 Ethereum Smart Contract Clauses
  • Read Applied Cryptography cover to cover with annotations
  • Keep a written Daily Journal and Physical Calendar
  • Say my autosuggestion every morning
  • Drink more water, run, and read every day (52 books, 365 miles)
  • Get 100K views on the site
  • More water every day
  • Eat Healthy: Chicken, Vegetables, Almonds, Salmon, Brown Rice, Eggs, Protein Powder, Oatmeal

Some of the highlights from 2017:

  • Started a blockchain company
  • Moved to Barcelona
  • Went to New York for the third time, for the Consensus 2017 Hackathon, and won awards from IBM Cloud and Wanxiang
  • Went to China for the first time for a Blockchain Conference
  • Presented on Blockchain at Dreamforce
  • Continued to invest in ICOs and trade cryptocurrencies

build, keep, learn, write, read.

The focus in 2018 will be working on the foundational side of a great organization for many years to come. The focus will be on using technology to build great products with a great team.

Focus in 2018

  • Build Dapps
  • Build with ReasonML and ReactJS
  • Work with 200 Customers to start building Dapps
  • Work with people passionate about Crypto/Blockchain + Salesforce
  • Build a great team
  • Go to China First Program
  • Learn Mandarin Every Day on my phone and read the phrase book
  • Run 3 Miles and lift every morning
  • Build something incredible with IPFS
  • Build Dapps NLP Engine from scratch with Prolog
  • Build Lisk Apps and deploy from Salesforce
  • Keep a daily journal and a physical calendar
  • Say my autosuggestion every morning
  • Run and read every day
  • Get 150K views on the site
  • Eat Healthy: Chicken, Vegetables, Almonds, Salmon, Brown Rice, Egg Whites, Protein Powder, Tuna, Oatmeal, Almond Milk
  • Drink more water

This year I spent a lot of my time traveling and with that I read and reread some great books:

  • Advanced Apex Programming
  • Building Microservices
  • Designing Data-Intensive Applications
  • Force.com Enterprise Architecture
  • Traction
  • Emotional Agility
  • Learning React
  • The Hard Thing About Hard Things
  • Slicing Pie
  • The Reputation Economy
  • Scar Tissue (on a single flight from London to Seattle)
  • Applied Cryptography with Annotations
  • The Startup Playbook
  • The Social Organism
  • Sam Walton’s Made in America
  • Think and Grow Rich
  • Ponder on This

This year I really got into numerology. I started to use numerology for trading cryptocurrencies and a number of other things. I think it is part of the other side of the coin, the metaphysical stuff we don't talk about; I find it very interesting.

Virtualization and Distributed Microservices

  • Microservices
  • The Art of Capacity Planning
  • ZooKeeper
  • Cryptography infosec
  • Docker Containers

This year I have definitely become more interested in leveraging virtual machines and building with containers. More and more research on decentralized/distributed systems, cryptography, virtualization, microservices, tokenization; these are the technologies leading the shift. Diving into architecture around technologies like Docker Swarm, Kong API Gateway and NGINX, Kafka, ZooKeeper. Learning to use decentralized technologies like Urbit and IPFS. Running and interacting with Bitcoin Core. Using Ubuntu VMs to run an Ethereum node provider or Hyperledger nodes, but overall just testing out different software using containers in a virtual machine instance. I think the combination of git, node, and docker is very powerful for developing web applications. Learning to use vim was very valuable as well for the VMs. Teams can ssh in, push to shared repos, and sync; it is aligned with the open-source and distributed nature. I have worked on defining multiple blockchain protocols and how they differ from each other; which ones to focus my time on, and which ones have the greatest upside long term. The yes or no on whether to dive in is always a question that takes time; one of the technologies I really went down a rabbit hole with this year was IPFS.

IPFS

I think that a decentralized filesystem built with hash links and content addressing is the future of the internet.

I am really excited to build with IPFS and combine it with other p2p protocols. The shift towards decentralized p2p technologies such as Bitcoin and IPFS is part of a shift towards individuals having full ownership, responsibility, and control of their valuable digital assets. A digital asset can be a coin, but also personal data, or the metadata about what we do. These networks are shifting the pendulum back towards personal computing that can still be networked and used, but not through a custodian service that we are told to fetch our data from.

The concept of What vs Where a piece of data is:

  • Tamper proof/evident
  • Permanent
  • Content Addressed
  • Immutable
  • Ever Present
  • Traversable
  • Hash Agnostic
  • Interoperable

It is, at its crux, a powerful concept.
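
A sketch of content addressing from Node, assuming a local IPFS daemon on port 5001 and the ipfs-http-client package (whose API has changed across versions; this follows the promise-based interface):

const ipfsClient = require('ipfs-http-client');
const ipfs = ipfsClient({ host: 'localhost', port: 5001, protocol: 'http' });

async function main() {
  // "what, not where": the same bytes yield the same CID on any node
  const { cid } = await ipfs.add('hello, content addressing');
  console.log('CID:', cid.toString());

  // fetch the content back by its hash, from whichever peer has it
  const chunks = [];
  for await (const chunk of ipfs.cat(cid)) chunks.push(chunk);
  console.log(Buffer.concat(chunks).toString());
}

main();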

Mining

I set my miners back up; the price of BTC made it interesting. It didn't take too long; I just had to order a few wires from Amazon and they were printing digital gold again. Though not profitable (in dollars at least; it is still creating bitcoins), there is nostalgia in the sound and in checking to see how much was created overnight. Personal capital investment in machinery that you own, creating something; that is why I like it.

Business

I learned a lot about starting a business over the past year. Everything from startup financing to product development sprints; to equity, vesting, boards, contracts, negotiations. It is all part of it. A few months in I wrote something along the lines of: starting a company is a mix of assumptions, strategies, emotions, decisions, expectations, risks, costs, disciplines, and interests; but above all, belief in a vision and a drive fueled by passion and focused persistence. One of the most important things, however, I believe is timing. When building Dapps.ai this year I spent a lot of time sitting in my studio in Barcelona, drawing out on the back of a physical calendar what an interface could look like, what problem it solved, what technologies it was built on, and how long it would take. I have months and months of these notes and, in review, I think timing was the underlying current in and catalyst for everything. At the beginning of January 2017, the hedge was that cryptocurrency and blockchain would eventually “go mainstream” and companies would have to adopt a strategy; the question was just when. The different technologies to choose from will in time become self-evident. I wrote down everything I thought would need to be part of the focus: Bitcoin, Ethereum, ZCash, Solidity, ethercamp, Blockstack, 21.co, Tierion, Lightning, Chain, Corda, Lisk, Hyperledger, Monax, truffle, metamask, web3, zeppelin, next.js, ipfs, swarm, bigchaindb, infura, raiden, string labs, parity, uPort.

(A lot focused on how one can securely prove/sign/send with a private key when interacting with Ethereum dapps.)

“Knowing enough to think you’re right, but not enough to know you’re wrong. What single bit of evidence, what would it take to show you that you’re wrong.” -NDT

Fast forward a year and the overall awareness of the market and industry is second to none, yet still in its infancy on a global scale. With this comes the realization that it will have to cycle back down (as it is cyclical), and we will have to continue to build upon our learnings and decide what technology will shape the future. Knowing what technology to focus on and devote time to is probably one of the tougher parts. The other part of this process was how to bring a product to market in a timely manner: coming up with an MVP, building a package, learning about execution context and limits, putting the package through security review. In addition, it made me think about things like serialization, encryption and decryption, key management for applications, and how to price and market an application. I want to continue to learn how to build on the Salesforce Platform and incorporate many other blockchain protocols. Every programming language, technology, platform, syntax; it all builds on each other. Building upon administrative and developer skills on the Salesforce platform and different computer science disciplines in the blockchain space; that is where the core focus is. Event-driven applications that can be used with existing customer data, implemented processes, and other apps drive a lot of the value of the combination; the polyglot persistence.

Crypto

Having looked at the prices of all of these cryptoassets for about 5 years now: you could day trade, but you could also create wallets offline, store the keys in a safe place, send a little of each of the top 20 coins to them, move to some island or forest, and not touch them.

coinmarketcap.com on January 4th, 2017

It is an incredible time to see all of these networks grow in the way they have. In the latter half of this year, the market grew from 100 billion to over half a trillion; unbelievable.

The groups being formed to take part in this movement, whether on Twitter or in a chat group, are a driving force. In retrospect, years from now, it will be obvious that decentralized cryptocurrencies, electronic cash, and digital tokens shaped the future. I won't go into price predictions. I will say that I believe network function tokens for things like computation/storage, tokens that enable private transactions, and token-driven platforms for decentralized application development will all continue to become networks that increase in demand and overall value.


Crypto and Blockchain

There is a polarization between the two sides: crypto and blockchain.

You have crypto, driven by token-incentivized open projects, and you have enterprise blockchain platforms, driven by enterprise cash and consortia. I think there is value in both, and things will be invented in both that contribute to the overall development and progression of the technology.

The analogy is like an intranet and the internet. Another good quote was something like: you can have an acoustic show or a rock concert, but you can't have one disguised as the other.

This is a bit of the sentiment I feel when hearing questions like “what is Blockchain?”, versus using open-source, protocol-based digital tokens and their respective blockchains. Similarly, developing applications with Ethereum and Hyperledger uses two different sides of the brain. With one, you are thinking about how to sign messages from a browser, gas optimization and price, and virtual abstractions that can be tokenized and traded; with the other, you are thinking in abstractions for business logic and who has access to the distributed infrastructure. One is a singleton that is updated, may become congested, but ultimately is a live and decentralized state machine. Both have a common goal of achieving consensus on the state of a system: Bitcoin on a state of UTXOs, Ethereum on a state of accounts, Hyperledger on the validity and order of transactions. I think that Proof-of-Work and the largest distribution of public/private key infrastructure, providing a verifiable, censorship-resistant informational system, is at its core the 0 to 1.

  • Decentralization
  • Security
  • Scalability

Again, from all of the above technologies there will continue to be an ecosystem that is growing and ever-changing. I started this year spending time configuring my testrpc, then becoming familiar with the web3 and solc npm packages at the command line, and finally understanding how to create and sign Ethereum messages from the browser to deploy smart contracts.

Web3, Solc, IPFS
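
As a sketch of that workflow: compiling a trivial contract with solc's standard JSON interface and deploying it with web3 1.x against a local node. The contract, account, and gas values are illustrative, and the pragma assumes a matching solc version from npm:

const Web3 = require('web3');
const solc = require('solc');

const web3 = new Web3('http://localhost:8545'); // assumption: a local unlocked node
const source = 'pragma solidity ^0.5.0; contract Counter { uint public n; function inc() public { n++; } }';

// solc standard JSON input/output
const output = JSON.parse(solc.compile(JSON.stringify({
  language: 'Solidity',
  sources: { 'Counter.sol': { content: source } },
  settings: { outputSelection: { '*': { '*': ['abi', 'evm.bytecode'] } } },
})));
const { abi, evm } = output.contracts['Counter.sol'].Counter;

async function deploy() {
  const [from] = await web3.eth.getAccounts();
  const instance = await new web3.eth.Contract(abi)
    .deploy({ data: '0x' + evm.bytecode.object })
    .send({ from, gas: 500000 });
  console.log('deployed at', instance.options.address);
}

deploy();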

By the end of the year I was able to create an application on the Salesforce AppExchange to do the same: an Ethereum client for the Salesforce platform. Abstract away the metalanguage and concepts, create an interface that is dynamic, and give companies a way to manage all of the necessary components of these networks. I didn't focus on building the killer app with the existing frameworks. The killer app of blockchains so far has been using and trading digital tokens. The two main networks whose function has been used are Bitcoin, and Ethereum for creating more tokens. We are just beginning to see dapps created for something other than the creation of tokens, i.e. non-fungible tokens, rarity. Tokenization of digital assets is undoubtedly going to continue to rise. Though we may not be using (most of) these digital tokens in the physical world, in the digital world, where we continue to spend more and more of our time, these tokens will be used.

Hyperledger

The second half of the year I spent a lot of time learning and building with Hyperledger Fabric and Hyperledger Composer. The main difference is that with this technology you are building everything from the ground up: the infrastructure, a blockchain explorer, the business network layer (participants, assets, transaction logic), the GUI to use the network. It is a different process than spinning up a node app and calling the Ropsten testnet. Configuring Dockerfiles, spinning up instances, writing the network model; though it is in JavaScript, it is much different from the ideal of deploying Solidity to a singleton and creating a web3 UI to interface with the contract.

Dapps Inc.

I want this company to grow globally, across different geolocations, computer science disciplines, and technologists; and I want to build a company that is built on products, revenue, and customers. I am interested in making decentralized technologies like Bitcoin, Ethereum, Hyperledger, and IPFS useful for businesses around the world.

This next year at Dapps Inc. my sole focus is on four main things:

  • capital – cryptoasset fund
  • chat – build the village
  • network – nodes and infrastructure
  • solutions – decentralized solutions

I want to continue to learn from a larger community and the great minds in the space.

  • Build the best team of people
  • Build the best products and services
  • Have customers pay us for the products and services.

The ability to use decentralized computing, with processing and decisions made at the edge endpoints of the graph, creates the fastest feedback loop. We can now access this edge data and update state for others on a p2p network. With this come the security and network challenges of connecting and processing this next era of computing. The decentralized and distributed era of technology is rapidly becoming reality. I truly enjoy working on this technology. I believe in the economic incentive mechanisms of peer-to-peer cryptographic protocols, and I am all for open-source founders being able to monetize and users becoming equity owners. That shift is already happening. It is important to keep building, developing, and combining the latest technology, learning every day.

Definiteness of purpose: the knowledge of what one wants, and a burning desire to possess it. I will continue to build the mastermind group and devote energy to driving combinatorial creativity through technology and people.

Happy New Year.

Live the Dream.

-Dom

Thoughts on Bitcoin

Bitcoin is.

The security of bitcoin as a protocol is derived from the quantum properties it uses as a secure informational state machine. The state transitions can be trusted based on pure mathematics.

  • Elliptic Curves
  • Digital Signatures
  • Proof-of-Work
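
The sign/verify primitive is easy to demonstrate with Node's crypto module; this shows only the elliptic-curve mechanics, not Bitcoin's actual transaction serialization:

const crypto = require('crypto');

// generate a secp256k1 key pair, the curve Bitcoin uses
const { publicKey, privateKey } = crypto.generateKeyPairSync('ec', {
  namedCurve: 'secp256k1',
});

const message = Buffer.from('pay 1 BTC to ...');

// sign with the private key; anyone with the public key can verify
const signature = crypto.createSign('SHA256').update(message).sign(privateKey);
const ok = crypto.createVerify('SHA256').update(message).verify(publicKey, signature);
console.log('valid signature:', ok); // true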

Bitcoin as a system is the most powerful machine / computer in the world.

It has achieved this in barely 10 years.

What makes it incredibly valuable with respect to the concept of money is that it is fairly easy to write to the most powerful machine in the world.

State is live.

Transactions that are submitted are picked up by 98% of the hashing power of the machine.

It is a universal singleton.

By definition it is information creation in the purest form.

It is quantum in that a bitcoin address exists in a digital layer whose function is created on a spent output and used on input.

The act of transferring a bitcoin is what creates the value.

No digital file is transferred.

Bitcoin is simply a namespace for the act of quantum transactional state verification.

It is a wave until the act of spending makes it material. It is intangible yet veridical.

Consensus is secured by gravity and energy.

There is a distribution of energy that is secured by the ability to call and announce state-change verification: its inputs and outputs on the network.

It is a graph database d = 1.32

Hashlink:

ipfs/QmZHh2KKXTFtdmk5ybV2hmjXZvEpmyXEvj11DnNoguukmF

Interoperable, Scalable Blockchain Protocols

Last week I attended the Wanxiang Global Blockchain Summit in Shanghai. There were a number of sessions, but one that really stuck with me was about potential scaling solutions for distributed consensus protocols.

There is a concept of polyglot persistence, which essentially means certain databases are meant for certain purposes. You have SQL databases, NoSQL databases, graph databases, blockchains, distributed fault-tolerant consensus ledgers, CRDTs; all of these different types of protocols. They have different consensus mechanisms, data models, query languages, and state-change and message-ordering models, such as validators or orderers; but in order to effectively scale, they need to interoperate and complement each other.

They will need to be able to share account state. They will need to be able to resolve any concurrency issues that arise. They will need to be able to handle replication and be eventually consistent.

Most of my time is spent working at the application layer, but an assumption I hold is that building solutions that incorporate multiple protocols will be important in the future.

This type of approach is not often found in the blockchain space, specifically because of the economic and personal incentives one has if heavily invested in a specific protocol. This could be time spent learning Solidity versus Kotlin; this could be owning a bunch of Ether versus NEO; this could be working at a company that is heavily contributing to a particular open project.

Regardless of one's preference in blockchain/distributed ledger/consensus database, this past trip to Shanghai validated that taking a multi-blockchain approach, and being open to learning the differences between protocols and how they complement each other, is going to have a great effect on the way solutions are built in the future.

An example of this is the new Ethermint Token.

Ethereum and Tendermint

In what is being called the Shanghai Accord, a new partnership between the two projects was announced in which they will work together to help scale blockchain technology. Tendermint, a Proof-of-Stake consensus protocol, with its new “internet of blockchains” hub, project Cosmos, is working closely with the Ethereum project to create derivative tokens on Tendermint infrastructure in what is being called a hard spoon: a new coin is minted on one chain (Tendermint) that uses the account state, or balances, of an existing token on another chain (Ethereum); the result is Ethermint. This could provide a scalable cross-blockchain solution in which tokens have shared state/account balances on both the Ethereum chain and the Tendermint chain.
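
A hypothetical sketch of the snapshot half of a hard spoon: reading ERC20 balances on Ethereum at a fixed block with web3, to seed another chain's genesis allocation. The endpoint, token address, and block height are all assumptions, and historical calls require an archive node:

const Web3 = require('web3');
const web3 = new Web3('http://localhost:8545'); // assumption: an archive node endpoint

const SNAPSHOT_BLOCK = 4370000; // hypothetical snapshot height
const tokenAddress = '0x0000000000000000000000000000000000000000'; // assumption: the token being spooned
const erc20Abi = [{
  constant: true,
  inputs: [{ name: '_owner', type: 'address' }],
  name: 'balanceOf',
  outputs: [{ name: 'balance', type: 'uint256' }],
  type: 'function',
}]; // minimal ERC20 fragment: just balanceOf

const token = new web3.eth.Contract(erc20Abi, tokenAddress);

async function snapshot(holders) {
  const genesis = {};
  for (const addr of holders) {
    // balanceOf at a fixed historical block; this is the account state the new chain inherits
    genesis[addr] = await token.methods.balanceOf(addr).call({}, SNAPSHOT_BLOCK);
  }
  return genesis; // feed into the new chain's genesis allocation
}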

Content-Addressed Blockchain Resolvers

This is but one approach to interoperable blockchains. Other approaches have been proposed, such as pegged sidechains, but one that seems very interesting to me is leveraging IPFS at the network layer to content-address and resolve across the different blockchains/databases (as long as there is a key-value store), using multihash and the IPLD thin-waist protocol. It is still largely a concept, but again it is aligned with the idea of multiple chains interoperating to address some of the issues blockchains face today.

Plasma

Another approach is Plasma; this allows transactions to occur off-chain, only broadcasting them to the chain if there is a transaction dispute (you never sent me the money, or vice versa). In this way the blockchain becomes an arbiter, a distributed courthouse: a way to have veridical computing if there is in fact a need for chain-based settlement.

Proof-of-Stake

Another takeaway and interesting concept is the movement to Proof-of-Stake. Ethereum's movement towards PoS is embodied in Vlad Zamfir's Casper protocol. Other chains such as Lisk use a DPoS system, and Tendermint is PoS as well. A common theme between them is the concept of weight. The stake, and the weight of stake, has some very interesting implications for the security and threat models that could be introduced to the protocol. More on Proof-of-Stake/Casper in another post.

There have been a few cross-chain atomic swaps recently as well.

Litecoin + Decred

https://blog.decred.org/2017/09/20/On-Chain-Atomic-Swaps/

And cross-chain transaction component verifications.

ZKSnarks + Ethereum

The Byzantium hard fork went live on Ethereum's Ropsten testnet at block 1,700,000; one of its features was support for zk-SNARKs. On the Ropsten testnet, a zk-SNARK component of a Zcash transaction was verified.

Public Chains and Financial Services

Lastly, from Nick Szabo's talk on smart contracts: the idea of a smart contract acting as a digital, distributed vending machine was very interesting. There are certain security guarantees, and the rules are deterministic and known (you put money into the machine, something comes out). I have heard the vending machine analogy from Chain.com as well; the concepts share the same vision of taking the best of public blockchain technology and the best of existing financial services.


Salesforce DX Platform Development

Salesforce DX is a new integrated experience designed for high-performance, end-to-end agile Salesforce development that is both open and flexible.

Before Salesforce DX, all custom objects and object translations were stored in one large Metadata API source file.

Salesforce DX solves this problem by providing a new source shape that breaks down these large source files to make them more digestible and easier to manage with a version control system.


A Salesforce DX project stores custom objects and custom object translations in intuitive subdirectories. This source structure makes it much easier to find what you want to change or update. Once you have developed and tested your code, you can convert the format of your source so that it's readable by the Metadata API, package it, and deploy it.


This is a complete shift from monolithic, org-based development to modular, artifact-based development, while also enabling continuous integration (CI) and continuous delivery (CD). This means development teams can develop separately and build toward a release of the artifact, not a release of updates to the org.

This will be a working post as I continue to go through the release notes and work with the new platform tools.

Resources

Get Started with Salesforce DX

Salesforce DX Developer Guide

Salesforce CLI Command Reference

Install the Tools:

DX CLI – install here

Force IDE 2 – install using this link and install the prerequisite Java kit here

Dev Hub (Trial Org) – sign up for a dev hub trial

Create your directory

Open up Git

cd c:

mkdir dx

cd dx

mkdir projectname

cd projectname

Configuring your DX project

Salesforce DX projects use a particular directory structure for custom objects, custom object translations, Lightning components, and documents. For Salesforce DX projects, all source files have a companion file with the “-meta.xml” extension.

When in the project directory:

sfdx force:project:create -n projectname

ls

you will see a few files.

The project configuration file sfdx-project.json indicates that the directory is a Salesforce DX project.

If the org you are authorizing is on a My Domain subdomain, update your project configuration file (sfdx-project.json)

"sfdcLoginUrl": "https://somethingcool.my.salesforce.com"

Package directories indicate which directories to target when syncing source to and from the scratch org. Important: Register the namespace with Salesforce and then connect the org with the registered namespace to the Dev Hub org.

Creating your VCS Repo

You can now initiate a git repo and connect it to bitbucket or github.

create bitbucket / github repo

git init
git add .
git commit -am ‘first dx commit’
git push -u origin master

if you go to your repo in the browser you should see the dx json config file and sfdx structure.

cd projectname

Creating a Scratch Org

sfdx force:org:create -s -f config/project-scratch-def.json -a ipfsDXScratch

You don’t need a password but can add one using force:user:password:generate

You can also target a specific dev hub that isn't the default.

sfdx force:org:create --targetdevhubusername jdoe@mydevhub.com --definitionfile my-org-def.json --setalias yet-another-scratch-org

–targerdevhubusername to set a dev org not the default

The Scratch Org

You can have

  • 50 scratch orgs per day per Dev Hub
  • 25 active scratch orgs
  • scratch orgs are deleted after 7 days

The scratch org is a source-driven and disposable deployment of Salesforce code and metadata. A scratch org is fully configurable, allowing developers to emulate different Salesforce editions with different features and preferences. And you can share the scratch org configuration file with other team members, so you all have the same basic org in which to do your development.

The project.json file is what makes the project and scratch org configurable. You can now define what type of org, permissions and settings you want to build and test in. It is a way to build and test with certain platform features, application settings and circumstances while also being able to version and have source control.

This also enables continuous integration, because you can test against a use case in one of the scratch orgs and see the impact based on the configuration or by importing data from the source org. Once bulkified and tested, it can be pushed back into the source org.

Scratch orgs drive developer productivity and collaboration during the development process, and facilitate automated testing and continuous integration.

Use many scratch org config files, named for their purpose. They are the blueprint for the org shape. For example:

Ethereum-scratch-def.json

Production-scratch-def.json

devEdition-scratch-def.json

{
  "orgName": "Acme",
  "country": "US",
  "edition": "Enterprise",
  "features": "MultiCurrency;AuthorApex",
  "orgPreferences": {
    "enabled": ["S1DesktopEnabled", "ChatterEnabled"],
    "disabled": ["SelfSetPasswordInApi"]
  }
}

Here is a complete list of the features and preferences that you can configure.

Authorizing your DX Project

sfdx force:auth:web:login --setalias my-sandbox

This will open the browser and you can login

sfdx force:org:open  --path one/one.app

For JWT and CI

sfdx force:auth:jwt:grant --clientid 04580y4051234051 --jwtkeyfile /Users/jdoe/JWT/server.key --username jdoe@acdxgs0hub.org --setdefaultdevhubusername --setalias my-hub-org --instanceurl https://test.salesforce.com

Creating

Create a Lightning Component / Apex Controller / Lightning Event at command line:

sfdx force:lightning:component:create -n TokenBalance -d force-app/main/default/aura


Testing

sfdx force:apex:test:run --classnames TestA,TestB --resultformat tap --codecoverage

Packaging

First convert from Salesforce DX format back to Metadata API format

sfdx force:source:convert --outputdir mdapi_output_dir --packagename managed_pkg_name

Deploy to the packaging org

sfdx force:mdapi:deploy --deploydir mdapi_output_dir --targetusername me@example.com

Creating a Beta version

  1. Ensure that you've authorized the packaging org: sfdx force:auth:web:login --targetusername me@example.com
  2. Create the beta version of the package: sfdx force:package1:version:create --packageid package_id --name package_version_name

Managed-Released Version

Later, when you're ready to create the Managed – Released version of your package, include the -m (--managedreleased) parameter

sfdx force:package1:version:create --packageid 033xx00000007oi --name "Spring 17" --description "Spring 17 Release" --version 3.2 --managedreleased

After the managed package version is created, you can retrieve the new package version ID using force:package1:version:list

Installing a package

sfdx force:package:install --packageid 04txx000000069zAAA --targetusername --installationkey

sfdx force:package1:version:list

(use a bitcoin/ethereum/token address as the package installation key)

Check to see if the key has been paid and, if it has, the package installs.

Push and Pull

sfdx force:source:push

This is used to push source from your project to a scratch org.

It will push the source code to the scratch org you have set as the default, or you can specify an org other than the default by using --targetusername (-u).

.forceignore – Any source file or directory that begins with a “dot,” such as .DS_Store or .sfdx, is excluded by default.

sfdx force:source:pull

This is used to pull changes from your scratch org back to your project.

Setting Aliases

sfdx force:alias:set org1=name org2=name2

To remove an alias, set it to nothing.

 sfdx force:alias:set my-org=

Listing your Orgs

sfdx force:org:list

(D) points to the default Dev hub username

(U) points to the default scratch org username

Retrieving and Converting Source from a Managed Package

Working with Metadata has usually required a tool like ANT. Now you can retrieve unmanaged and managed packages into your Salesforce DX project. This is ideal if you have a managed package in a packaging org. You can retrieve that package, unzip it to your local project, and then convert it to Salesforce DX format, all from the CLI.

Essentially, you can take your existing package and reshape it into the Salesforce DX project format.

Custom Object with subdirectories:

  • businessProcesses
  • compactLayouts
  • fields
  • fieldSets
  • listViews
  • recordTypes
  • sharingReasons
  • validationRules
  • webLinks

mkdir mdapipkg

sfdx force:mdapi:retrieve -s -r ./mdapipkg -u username -p packagename

sfdx force:mdapi:retrieve -u username  -i jobid

Convert the metadata API source to Salesforce DX project format.

sfdx force:mdapi:convert --rootdir <retrieve dir name> --outputdir <output dir name>

Additional Things

To get JSON responses to all Salesforce CLI commands without specifying the --json option each time, set the SFDX_CONTENT_TYPE environment variable: export SFDX_CONTENT_TYPE=JSON

Log levels: --loglevel (ERROR, WARN, INFO, DEBUG, TRACE)

To globally set the log level for all CLI commands, set the SFDX_LOG_LEVEL environment variable. For example, on UNIX: export SFDX_LOG_LEVEL=DEBUG

How to setup and build Hyperledger Fabric Blockchain Applications

This is an introduction to how to configure and launch the Hyperledger Fabric v1.0 permissioned blockchain network on an Ubuntu 16.04 Virtual Machine on AWS.

If you want to skip configuring the VM/images check out the IBM Bluemix Fabric managed service: https://console.ng.bluemix.net/catalog/services/blockchain/

Below are the command-line steps, along with links to additional guides on configuring your network.

The first part of this article is focused on the infrastructure layer, Hyperledger Fabric.

The second part of this article is focused on the application layer, Fabric Composer.

You can spin up the latest versions of Fabric V1 and use the shell scripts for standup and teardown of the network in the second part.

The Infrastructure Layer

Creating your VM on AWS

The first thing you are going to do is go to AWS and create a new account:

Go to https://aws.amazon.com/

Create your account; you will need to create a support request to increase the number of EC2 instances you can run.

Once your limit has increased you will launch a new EC2 Virtual Machine using the Launch Wizard.

Choose your Instance Type: c3.large

Go through the configuration screens and add storage to your VM; 8, 32, or 64 GB should work.

Once this has been done go to the Launch screen and generate a new key-pair for this Virtual Machine.

You will download the key pair and put it in some folder on your local machine.

Next the instance will launch and you will be able to see it on your dashboard.

Copy the URL from your Virtual Machine

Open Git Bash (or a terminal) on your machine and go to the directory where you downloaded the key to your Virtual Machine.

then ssh -i ./yourkeyname.pem ubuntu@yoururlname

Once you have SSHed in, you are able to use the Virtual Machine.

Below are the exact steps, or you can continue to the more verbose version in the rest of the article.

This is the step-by-step way to do it in less than 20 minutes:


sudo apt-get update
wget https://storage.googleapis.com/golang/go1.7.4.linux-amd64.tar.gz
sudo tar -xvf go1.7.4.linux-amd64.tar.gz
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
sudo apt-add-repository 'deb https://apt.dockerproject.org/repo ubuntu-xenial main'
sudo apt-get update
sudo apt-get install -y docker-engine
sudo curl -o /usr/local/bin/docker-compose -L "https://github.com/docker/compose/releases/download/1.11.2/docker-compose-$(uname -s)-$(uname -m)"
sudo chmod +x /usr/local/bin/docker-compose
mkdir ~/fabric-tools && cd ~/fabric-tools
curl -O https://raw.githubusercontent.com/hyperledger/composer-tools/master/packages/fabric-dev-servers/fabric-dev-servers.zip
sudo apt install unzip
unzip fabric-dev-servers.zip
curl -O https://hyperledger.github.io/composer/prereqs-ubuntu.sh
chmod u+x prereqs-ubuntu.sh
./prereqs-ubuntu.sh
cd fabric-tools
./downloadFabric.sh
./startFabric.sh
./createComposerProfile.sh
cd ..
git clone https://github.com/hyperledger/composer-sample-networks.git
cp -r ./composer-sample-networks/packages/basic-sample-network/ ./my-network
rm -rf composer-sample-networks
cd my-network
npm install
npm install -g composer-cli
npm install -g composer-rest-server
cd dist
composer network deploy -p hlfv1 -a basic-sample-network.bna -i PeerAdmin
composer-rest-server

Don’t forget to change your security groups to open inbound/outbound traffic!


 

Configuring your Virtual Machine

There are a number of packages that you are going to need to install and configure in your VM.

Install Git, Nodejs and GO

Digital Ocean Guide – Node.js

Installing GO and setting your GOPATH

*** update USE NVM for node and npm, no sudo required, will be important for your fabric composer setup ***

sudo apt-get update

sudo apt-get install git

sudo apt-get install nodejs   # or better, use nvm

sudo apt-get install npm   # or better, use nvm

wget https://storage.googleapis.com/golang/go1.7.4.linux-amd64.tar.gz
sudo tar -xvf go1.7.4.linux-amd64.tar.gz
sudo mv go /usr/local
cd ~
mkdir yourgodirectoryname
export GOROOT=/usr/local/go
export GOPATH=$HOME/yourgodirectoryname
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
sudo nano ~/.profile

Add the three export commands above to the bottom of the ~/.profile file.

Install Docker 1.12 and Docker Compose 1.11.2

Digital Ocean Guide

sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
sudo apt-add-repository 'deb https://apt.dockerproject.org/repo ubuntu-xenial main'
sudo apt-get update

sudo apt-get install -y docker-engine

docker


sudo curl -o /usr/local/bin/docker-compose -L "https://github.com/docker/compose/releases/download/1.11.2/docker-compose-$(uname -s)-$(uname -m)"
sudo chmod +x /usr/local/bin/docker-compose
docker-compose -v

*** At this point you should have a Linux VM fully set up and you are now ready to download the docker images for the various components of your Blockchain network. ***

Clone the Hyperledger Fabric v1 Images

You can download the images here and this will show you how to call chaincode directly at the command line. Later on we will use Composer to spin up docker images and write to the chain by deploying a business network archive.

Architecture Overview

All Hyperledger projects follow a design philosophy that includes a modular extensible approach, interoperability, an emphasis on highly secure solutions, a token-agnostic approach with no native cryptocurrency, and the development of a rich and easy-to-use Application Programming Interface (API). The Hyperledger Architecture WG has distinguished the following business blockchain components:

  • Consensus Layer – Responsible for generating an agreement on the order and confirming the correctness of the set of transactions that constitute a block.
  • Smart Contract Layer – Responsible for processing transaction requests and determining if transactions are valid by executing business logic.
  • Communication Layer – Responsible for peer-to-peer message transport between the nodes that participate in a shared ledger instance.
  • Data Store Abstraction – Allows different data-stores to be used by other modules.
  • Crypto Abstraction – Allows different crypto algorithms or modules to be swapped out without affecting other modules.
  • Identity Services – Enables the establishment of a root of trust during setup of a blockchain instance, the enrollment and registration of identities or system entities during network operation, and the management of changes like drops, adds, and revocations. Also provides authentication and authorization.
  • Policy Services – Responsible for policy management of various policies specified in the system, such as the endorsement policy, consensus policy, or group management policy. It interfaces and depends on other modules to enforce the various policies.
  • APIs – Enables clients and applications to interface to blockchains.
  • Interoperation – Supports the interoperation between different blockchain instances.

 

SOURCE: https://www.hyperledger.org/wp-content/uploads/2017/08/HyperLedger_Arch_WG_Paper_1_Consensus.pdf

Getting Started

This is the Fabric Getting Started Guide and here is the end-to-end guide on Github

cd ~
cd yourgodirectoryname
mkdir src
cd src
mkdir github.com
cd github.com
mkdir hyperledger
cd hyperledger
git clone https://github.com/hyperledger/fabric.git
sudo apt install libtool libltdl-dev

Download the Docker Images for Fabric v1.0

cd fabric
make release-all
make docker
docker images

You should see your docker images (this is for x86_64-1.0.0-alpha2):

REPOSITORY TAG IMAGE ID CREATED SIZE
dev-peer0.org1.example.com-marbles-1.0 latest 73c7549744f3 6 days ago 176MB
hyperledger/fabric-couchdb latest 3d89ac4895f9 12 days ago 1.51GB
hyperledger/fabric-couchdb x86_64-1.0.0-alpha2 3d89ac4895f9 12 days ago 1.51GB
hyperledger/fabric-ca latest 86f4e4280690 12 days ago 241MB
hyperledger/fabric-ca x86_64-1.0.0-alpha2 86f4e4280690 12 days ago 241MB
hyperledger/fabric-kafka latest b77440c116b3 12 days ago 1.3GB
hyperledger/fabric-kafka x86_64-1.0.0-alpha2 b77440c116b3 12 days ago 1.3GB
hyperledger/fabric-zookeeper latest fb8ae6cea9bf 12 days ago 1.31GB
hyperledger/fabric-zookeeper x86_64-1.0.0-alpha2 fb8ae6cea9bf 12 days ago 1.31GB
hyperledger/fabric-orderer latest 9a63e8bac1f5 12 days ago 182MB
hyperledger/fabric-orderer x86_64-1.0.0-alpha2 9a63e8bac1f5 12 days ago 182MB
hyperledger/fabric-peer latest 23b4aedef57f 12 days ago 185MB
hyperledger/fabric-peer x86_64-1.0.0-alpha2 23b4aedef57f 12 days ago 185MB
hyperledger/fabric-javaenv latest a9ca2c90a6bf 12 days ago 1.43GB
hyperledger/fabric-javaenv x86_64-1.0.0-alpha2 a9ca2c90a6bf 12 days ago 1.43GB
hyperledger/fabric-ccenv latest c984ae2a1936 12 days ago 1.29GB
hyperledger/fabric-ccenv x86_64-1.0.0-alpha2 c984ae2a1936 12 days ago 1.29GB
hyperledger/fabric-baseos x86_64-0.3.0 c3a4cf3b3350 4 months ago 161MB

Create your network artifacts

cd examples
cd e2e_cli
./generateArtifacts.sh <channel-ID>

Make sure to choose a channel name for <channel-ID>, e.g. mychannel.

Then run the below to launch the network:

CHANNEL_NAME=<channel-id> TIMEOUT=<pick_a_value> docker-compose -f docker-compose-cli.yaml up -d

or use this script to launch the network in one command.

./network_setup.sh up <channel-ID> <timeout-value>
docker ps

To manually run and call the network:

Open the docker-compose-cli.yaml file and comment out the command to run script.sh. Navigate down to the cli container and place a # to the left of the command. For example:

  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
# command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}; sleep $TIMEOUT'

Save the file and return to the /e2e_cli directory.

# Environment variables for PEER0
CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/peer/peer0/localMspConfig
CORE_PEER_ADDRESS=peer0:7051
CORE_PEER_LOCALMSPID="Org0MSP"
CORE_PEER_TLS_ROOTCERT_FILE=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/peer/peer0/localMspConfig/cacerts/peerOrg0.pem
docker exec -it cli bash
peer channel join -b yourchannelname.block
peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
# remember to preface this command with the global environment variables for the appropriate peer
# remember to pass in the correct string for the -C argument.  The default is mychannel
peer chaincode instantiate -o orderer0:7050 --tls $CORE_PEER_TLS_ENABLED --cafile $GOPATH/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem -C mychannel -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Args":["init","a", "100", "b","200"]}' -P "OR ('Org0MSP.member','Org1MSP.member')"
peer chaincode invoke -o orderer0:7050  --tls $CORE_PEER_TLS_ENABLED --cafile $GOPATH/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem  -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

You now should have a functioning network that you can call.

To Setup and use Couch DB for richer queries follow the tutorial here.

To clean and bring down the network use 

./network_setup.sh down
sudo docker ps

You shouldn’t see any containers running.

To stop or remove individual containers use

sudo docker stop <name>

sudo docker rm <name>

To learn about the Hyperledger Fabric API visit http://jimthematrix.github.io/. This is a great resource for learning about the intricacies of the network and the different certificates needed for trusted transactions.

Below is the second part, the Application Layer: Fabric Composer.

The Application Layer

You have configured and set up a VM, you have your docker containers running, and your Hyperledger Fabric V1.0 blockchain infrastructure is live; now you want to model, build and deploy a business application on this network. This article will show you exactly how to wire up the application layer and the infrastructure layer.

You can run Composer Playground locally using npm or docker:

npm install -g composer-playground
docker run -d -p 8080:8080 hyperledger/composer-playground

Once you have configured your business network you can export it as a BNA (Business Network Archive) file.

 

The Business Network

Permissioned blockchain applications built for the enterprise need an abstraction layer. This is provided by Fabric Composer: a toolset and application framework that enables you to quickly model and deploy applications to Fabric infrastructure. It is a framework that enables you to reason about the Participants, the Assets, and the Transaction logic that drive state changes on your distributed ledger. This business logic and these processes are what drive the distributed state changes to the peers on your network.

Hyperledger Fabric Composer

Fabric Composer is an open-source project and part of the Hyperledger Foundation.

First, ssh into the same VM you set up.

ssh -i ./yourkeyname.pem ubuntu@yoururlname

Installing Fabric Composer Dev Tools

You should have the majority of these tools installed in your configured VM, but this script will make sure everything is at the correct version and that you haven’t missed anything.

curl -O https://hyperledger.github.io/composer/prereqs-ubuntu.sh

chmod u+x prereqs-ubuntu.sh
./prereqs-ubuntu.sh
  1. To install composer-cli run the following command:
    npm install -g composer-cli
    

    The composer-cli contains all the command line operations for developing business networks.

  2. To install generator-hyperledger-composer run the following command:
    npm install -g generator-hyperledger-composer
    

    The generator-hyperledger-composer is a Yeoman plugin that creates bespoke applications for your business network.

  3. To install composer-rest-server run the following command:
    npm install -g composer-rest-server
    

    The composer-rest-server uses the Hyperledger Composer LoopBack Connector to connect to a business network, extract the models and then present a page containing the REST APIs that have been generated for the model.

  4. To install Yeoman run the following command:
    npm install -g yo
    

    Yeoman is a tool for generating applications. When combined with the generator-hyperledger-composer component, it can interpret business networks and generate applications based on them.

If you use VSCode, install the Hyperledger Composer VSCode plugin from the VSCode marketplace. There is also a plugin for Atom.

Make sure you have the V1 images; if you have any old ones, remove them with:

sudo docker rmi <image> --force
sudo docker container prune

Fabric Tools 

mkdir ~/fabric-tools && cd ~/fabric-tools

curl -O https://raw.githubusercontent.com/hyperledger/composer-tools/master/packages/fabric-dev-servers/fabric-dev-servers.zip
sudo apt install unzip
unzip fabric-dev-servers.zip
export FABRIC_VERSION=hlfv1

To download Hyperledger-Fabric use the script below:

cd ~/fabric-tools
./downloadFabric.sh
sudo docker images
hyperledger/fabric-ca x86_64-1.0.1 5f30bda5f7ee 2 weeks ago 238MB
hyperledger/fabric-couchdb x86_64-1.0.1 dd645e1e92c7 2 weeks ago 1.48GB
hyperledger/fabric-orderer x86_64-1.0.1 bbf2708c9487 2 weeks ago 179MB
hyperledger/fabric-peer x86_64-1.0.1 abb05def5cfb 2 weeks ago 182MB
hyperledger/fabric-ccenv x86_64-1.0.1 7e2019cf8174 2 weeks ago 1.29GB

You can start Hyperledger Fabric using this script:

./startFabric.sh

if you run:

sudo docker ps

You should see your network is up and running

Create and Connect your Hyperledger Composer Profile

Hyperledger Fabric is distinguished as a platform for permissioned networks, where all participants have known identities.

UPDATE: ID Cards

The below line should be a huge ah-hah moment.

**********An ID Card contains an Identity for a single Participant within a deployed business network. ************

You have defined a Participant in your business network (modeled it, but also actually created a record of one), given the participant an ID (id, name, email), and now: the above. The ID Card contains an Identity for that single Participant within the deployed business network.

It is like your auth method. You create the participant on the network and then you can go and create an ID that lets you sign in as that participant. Then create another participant, of the same type or a different type, and create another ID for that one.
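To make the auth analogy concrete, here is a minimal sketch of connecting with an identity using the composer-client API of this era; the profile (hlfv1), network name (my-network), ID (PeerAdmin), and secret (randomString) are the same values used elsewhere in this guide, and the exact signature may vary by Composer version:

var BusinessNetworkConnection = require("composer-client").BusinessNetworkConnection;

var connection = new BusinessNetworkConnection();

// connect(connectionProfile, businessNetworkIdentifier, enrollmentId, enrollmentSecret)
connection.connect("hlfv1", "my-network", "PeerAdmin", "randomString")
  .then(function (businessNetworkDefinition) {
    console.log("Connected to", businessNetworkDefinition.getName());
    return connection.disconnect();
  })
  .catch(function (error) { console.error(error); });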

*** Command Line – Create Profile ***

From the fabric-tools directory, issue ./createComposerProfile.sh

This script will create a PeerAdmin profile for you.

Connection profiles are a bit confusing and can be frustrating to set up, but this is an integral part of being able to build and deploy your business network.

There are two folders on your machine:

cd ~/.composer-credentials

and

cd ~/.composer-connection-profiles

When you run Composer locally using Docker containers, the profiles you create will be stored in these folders.

You can find your PeerAdmin profile's public and private keys in the composer-credentials directory.

You then have to import your Composer profile using this command:

composer identity import -p hlfv1 -u PeerAdmin -k /c/Users/domst/.composer-credentials/114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457-priv -c /c/Users/domst/.composer-credentials/114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457-pub

OR

composer identity import -p hlfv1 -u PeerAdmin -c ${HOME}/.composer-credentials/114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457-pub -k ${HOME}/.composer-credentials/114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457-priv

You should get a Command Successful Message.

You may run into a permissions problem if your credentials directory is still owned by root, so check using

ls -la /home/ubuntu/.composer-credentials

If it is, run

sudo chown -R ubuntu /home/ubuntu/.composer-credentials

Network Teardown

Also, here are some more scripts to stop and tear down the infrastructure layer.

./stopFabric.sh

At this point you should be ready to get dialed in and model out and wire up a business network. Grab some coffee, a nice glass of water, a new piece of gum; you’re just about to get going. In this next part we are going to connect a Sample Business Network to your Hyperledger Fabric V1 Blockchain.

To start over do this

./teardownFabric.sh

Or continue on… to connect a Sample Business Network to your Hyperledger Fabric V1 Blockchain.

Building Your Business Network

This section isn’t mandatory, but if you want to use the playground editor this is some background on how to access it in the browser. Or you can skip this and use vim.

Make sure Fabric Composer is running and your security groups have inbound and outbound rules open, then go to your Amazon Web Services URL with the composer editor extension:

http://yourURL.compute.amazonaws.com:8080/editor

Model your Business Network using the Hyperledger Composer Playground. A Hyperledger Composer Business Network Definition is composed of a set of model files and a set of scripts.  This can be run in the browser or locally using a docker container.

Modeling a Business Network consists of:

  • Participants – Members of the Business Network
  • Assets – The goods and property tracked on the network
  • Transactions – State change mechanism of the Network

You also are able to use:

  • Concepts
  • enum
  • abstract

Lastly, the Business Network can define:

  • Events – Defined indicators that can be subscribed to by external systems
  • Access Control – Permissions for state changes

You can use JavaScript to define the transaction logic that you would like to use in your applications. We are just going to use a Sample Business Network; this can be edited and redeployed to update the Blockchain.
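For a flavor of what that JavaScript looks like, here is a minimal transaction processor in the style of the basic sample network; the org.acme.sample namespace and the SampleTransaction/SampleAsset types come from that sample and may differ in your copy:

/**
 * Process a SampleTransaction: overwrite the asset's value.
 * @param {org.acme.sample.SampleTransaction} tx the transaction to process
 * @transaction
 */
function onSampleTransaction(tx) {
    // Apply the state change carried by the transaction
    tx.asset.value = tx.newValue;
    // Persist the updated asset back to its registry
    return getAssetRegistry('org.acme.sample.SampleAsset')
        .then(function (registry) {
            return registry.update(tx.asset);
        });
}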

Clone a sample network using this repo:

git clone https://github.com/hyperledger/composer-sample-networks.git
cd composer-sample-networks/packages

cd basic-sample-network

npm install

You should get something like this:

Creating Business Network Archive

Looking for package.json of Business Network Definition
Input directory: /home/ubuntu/my-network

Found:
Description: The Hello World of Hyperledger Composer samples
Name: my-network
Identifier: my-network@0.1.8

Written Business Network Definition Archive file to
Output file: ./dist/my-network.bna

Command succeeded

 

Deploying Your Business Network To Your Blockchain Infrastructure

Once you have your Sample Business Network you are going to want to create a BNA file. A BNA is the Business Network Archive: a file that describes your network configuration and application and can be deployed to the infrastructure you have set up. To deploy your network use:

composer network deploy -a my-network.bna -p hlfv1 -i PeerAdmin -s randomString

TROUBLESHOOTING:

✖ Deploying business network definition. This may take a minute…

Error: Error trying deploy. Error: Error trying install chaincode. Error: Failed to deserialize creator identity, err ParseCertificate failed asn1: structure error: tags don’t match (2 vs {class:0 tag:6 length:7 isCompound:false}) {optional:false explicit:false application:false defaultValue:<nil> tag:<nil> stringType:0 timeType:0 set:false omitEmpty:false} @2
Command failed

Stack Overflow Answer

Setting up our REST Server

Hyperledger Fabric v1.0 provides a basic API using Protocol Buffers over gRPC for applications to interact with the blockchain network. Composer enables you to create a REST server and communicate with the blockchain network using JSON. This part is important and you should have an understanding of it before starting to model out your business network / application. The REST server Composer generates is built dynamically from the business network participants and assets you design. You can configure your network and launch a REST API that can be called from other applications.

composer-rest-server
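Once the REST server is up (it listens on port 3000 by default), the generated endpoints can be called from any HTTP client. A minimal Node.js sketch, assuming the basic sample network's SampleAsset type is deployed:

var http = require("http");

// List all SampleAssets through the generated REST API
http.get("http://localhost:3000/api/SampleAsset", function (res) {
  var body = "";
  res.on("data", function (chunk) { body += chunk; });
  res.on("end", function () {
    console.log(JSON.parse(body));
  });
});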

Wiring It All Up

Once your Business Network is deployed to your infrastructure you should be able to verify, in the top right of the Playground, that your connection profile is connected. You are now ready to send transactions to the blockchain directly from the composer interface or by calling your REST server from a client.

If you are running into any trouble feel free to reach out at dominic@dapps.ai

Versions:

https://gateway.ipfs.io/ipfs/QmXMaNJRGstyib8iAZCZ5HxShorCxCDFDzeG2Hx73hTYcp

5.26 – using v1.0

https://gateway.ipfs.io/ipfs/QmWtaoMGA8SVc7kdjUnRP5hS1KsX51TEYH57VNTPJWQPFh

5.27 – using v1.0 alpha2 docker images and added in fabric composer (current)

https://gateway.ipfs.io/ipfs/Qme6FT3HQzUsY8z69P6KCutaUogmc4BBxCSPJDt3czaM6M

 

IPLD Resolvers

This was in my drafts from a few months back, right when I got really excited about IPFS, content addressing data, and the potential future applications that will be built on the protocol.

Content addressing is a true computational advancement in the way that we think about adding and retrieving content on the web. We can take existing databases and use the various parts of the IPFS protocol to build clusters of nodes that serve content in the form of IPLD structures.

IPLD enables futureproof, immutable, secure graphs of data that are essentially giant Merkle trees. The advantages of converting a relational database or key-value DB into a merkle tree are endless.

The data and named links give the collection of IPFS objects the structure of a Merkle DAG — DAG meaning Directed Acyclic Graph, and Merkle to signify that this is a cryptographically authenticated data structure that uses cryptographic hashes to address content.

This protocol enables an entirely new way to search and retrieve data. It is no longer about where a file is located on a website. It is now about what exact piece of data you are looking for. If I send you an email with a link in it and 30 days later you click the link, how can you be certain that the data you are looking at is the same as what I originally sent you? You can’t.

With IPLD you can use content addressing to know for certain that a piece of content has not changed. You can traverse the IPLD object and seamlessly pick out pieces of the data. And because IPLD data is content addressed, once it is locally cached you can use the application offline.
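As a concrete sketch using the ipfs-api module that appears later in this article, assuming a local IPFS daemon with its API on port 5001 and a build recent enough to expose the dag commands:

var ipfsAPI = require("ipfs-api");
var ipfs = ipfsAPI("localhost", "5001", { protocol: "http" });

// Put a JSON object into the DAG; the CID we get back is derived purely
// from the content, so the same object always yields the same address
ipfs.dag.put({ invoice: 42, status: "paid" }, { format: "dag-cbor", hashAlg: "sha2-256" }, function (err, cid) {
  if (err) { return console.error(err); }
  console.log("CID:", cid.toBaseEncodedString());
  // Fetch it back by content, not by location
  ipfs.dag.get(cid, function (err, result) {
    if (!err) { console.log(result.value); }
  });
});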

You can also work on an application with other peers around you using shared state. But why is this important for businesses?

Content addressing for companies builds on a number of open standards. We can now run fully encrypted private networks that are content addressed.

This is a future-proof system. Hashes that are currently used in databases can be broken, but now we have multi-hashing protocols.
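The idea behind multi-hashing is that a digest carries the name of the algorithm that produced it, so a broken algorithm can be swapped out without ambiguity. A toy illustration in Node.js (this captures the spirit of multihash, not its real binary format):

var crypto = require("crypto");

// Prefix each digest with the algorithm that produced it
function selfDescribingHash(algo, data) {
  return algo + "-" + crypto.createHash(algo).update(data).digest("hex");
}

console.log(selfDescribingHash("sha256", "same row, two hash functions"));
console.log(selfDescribingHash("sha512", "same row, two hash functions"));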

We can build blockchains that use IPLD and libp2p.

The IPLD resolver uses two main pieces: bitswap and the blocks-service.

Bitswap transfers blocks, and the blocks-service determines what needs to be fetched based on what is already in the local cache. This prevents duplication and increases efficiency in the system.

We will be creating a resolver for the enterprise that enables enterprises to take their existing database tables and convert them into giant merkle trees that are interoperable. IPLD is like a global adapter for cryptographic protocols.

Creating the Enterprise Forest

Here is where it gets interesting. The enterprise uses a few different types of database: relational SQL databases and distributed noSQL databases.

Salesforce is another type of database that we can take and convert into a Merkle tree.

S3 is another type of data store.

I call this tables to trees.

We are going to take systems that are siloed on-prem or centralized with a cloud provider and turn them into merkle trees using IPLD content addressing.

The IPLD resolver is an internal DAG API module.

We can create a plug-and-play system of resolvers. This is where a company can take their existing relational database and keep it.

We will resolve the database and run a blockchain in parallel. This blockchain will be built using two protocols from the IPFS project: ipld and libp2p.

The IPLD Resolver for the enterprise will consist of:

  • .put
  • .get
  • .remove
  • .support.add
  • .support.rm

We will take any enterprise database and build out the content addressed merkle tree on it.
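A hypothetical shape for such a resolver in JavaScript; the dag-mysql name, the in-memory store, and the function bodies are illustrative only, mirroring the interface listed above:

var crypto = require("crypto");

// Hypothetical dag-mysql resolver sketch (illustrative only): rows become
// DAG nodes in an in-memory store, addressed by a hash of their content.
function createDagMySqlResolver() {
  var store = {};   // stand-in for a real block store
  var formats = {}; // registered serialization formats
  return {
    put: function (node) {
      var cid = "sha256-" + crypto.createHash("sha256")
        .update(JSON.stringify(node)).digest("hex");
      store[cid] = node;
      return cid;
    },
    get: function (cid) { return store[cid]; },
    remove: function (cid) { delete store[cid]; },
    support: {
      add: function (codec, impl) { formats[codec] = impl; },
      rm: function (codec) { delete formats[codec]; }
    }
  };
}

// Usage: address a row by its content rather than its primary key
var resolver = createDagMySqlResolver();
var cid = resolver.put({ table: "orders", id: 123, total: 99.95 });
console.log(cid, resolver.get(cid));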

Content Addressing enterprise content on IPFS-Clusters

An enterprise cluster can consist of 10,000, 50,000, or 100,000 nodes, and each IPLD object has to be under 1 MB.

All of these nodes will be hosting the merkle tree mirror of the relational database.

This can also enable offline operation for the network. Essentially they have their own protocol that is mirroring their on-premise system.

We will be starting with dag-mySQL, dag-noSQL, dag-apex

The MySQL hash function that exists on the on-premise system stays when implemented. If that hash is ever broken, there is no way to upgrade the system without completely migrating it.

This makes data migration a lot easier, or not even necessary, in the future. Once the data is content addressed and the merkle tree is created, we can start traversing the data.

We will also build interfaces that can interact with the IPLD data

IPLD is a format that can enable version control. The resolver will essentially take any database, any enterprise implementation, and convert it into a merkle tree.

We are essentially planting the seeds (product) and watering them (services). Once these trees are all in place they can communicate, because they are all using the same data format.

Future Proofing your Business

We are creating distributed, authenticated, hash-linked data structures. IPLD is a common hash-chain format for distributed data structures.

Each node in these merkle trees will have a unique Content Identifier (CID) – a format for these hash-links.

This is a database-agnostic path notation: any hash – any data format.

This will have a multihash – multiple cryptographic hashes, multicodec – multiple serialization formats, multibase – multiple base encodings 

Again, why is this important for businesses? The most important gains are transparency and security: this is a tamper-proof, tamper-evident database that can be shared, traversed, replicated, distributed, cached, and encrypted, and you now know exactly WHAT you are linking to, not where. You know which hash function to verify with. You know what base it is in.

This new protocol enables cryptographic systems to interoperate over a p2p network that serves hash linked data.

IPFS 0.4.5 includes the dag command that can be used to traverse IPLD objects.

Now to write the dag-sql resolver.

Take any existing relational database and you can now traverse that database using content addressing.

Content Addressing your database to a cluster of IPFS Cluster nodes on a private encrypted network.

The deterministic head of the cluster then writes new entries.

We use the Ethereum network to assign a key pair for your users to leverage the mobile interface. You can sign in via fingerprint or by facial recognition using the Microsoft Cognitive Toolkit. Your database will run in parallel; you will keep your on-premise system and have a content-addressed mirror of it. Content addressing a filesystem or any type of database creates a Merkle DAG. With this Merkle DAG we can format your data in a way that is secure, immutable, tamper proof, futureproof and able to communicate with other underlying network protocols and application stacks. We can effectively create a blockchain network out of your existing database that runs securely on a cluster of p2p nodes. I am planting business merkle dag seeds in the merkle forest. Patches of these forests will be able to communicate with other protocols via any hash and in any format.

This is the way that the internet will work going into the future. A purely decentralized web of business trees.

 

Continued:

On Graph Data Types

DAG:

In a graph model, each vertex consists of:

  • A unique identifier
  • A set of outgoing edges
  • A set of incoming edges
  • A collection of properties (key-value pairs)

Each edge consists of:

  • A unique identifier
  • The vertex at which the edge starts (the tail vertex)
  • The vertex at which the edge ends (the head vertex)
  • A label to describe the kind of relationship between the two vertices
  • A collection of properties (key-value pairs)

Important aspects of the model:

Any vertex can have an edge connecting it with another vertex. There is no schema that restricts which kinds of things can or cannot be associated.

Given a vertex, you can efficiently find both its incoming and its outgoing edges, and thus traverse the graph – i.e. follow a path through a chain of vertices – both forward and backward. This is why you can traverse a hashed blockchain with a resolver.

By using different labels for different kinds of relationships, you can store several different kinds of information in a single graph, while still maintaining a clean data model.
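A minimal sketch of this vertex/edge model as plain JavaScript objects (all names are illustrative):

// Vertices and edges with identifiers, adjacency lists, and properties
var vertices = {
  v1: { id: "v1", outgoing: ["e1"], incoming: [], props: { name: "Order-123" } },
  v2: { id: "v2", outgoing: [], incoming: ["e1"], props: { name: "Invoice-456" } }
};

var edges = {
  e1: { id: "e1", tail: "v1", head: "v2", label: "billedBy", props: {} }
};

// Traverse forward from v1 and backward from v2
vertices.v1.outgoing.forEach(function (eid) {
  console.log("v1 -[" + edges[eid].label + "]-> " + edges[eid].head);
});
vertices.v2.incoming.forEach(function (eid) {
  console.log("v2 <-[" + edges[eid].label + "]- " + edges[eid].tail);
});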

 

 

Web3, Solc, IPFS and IPLD

Web3: JavaScript API for interacting with the Ethereum Virtual Machine

Solidity: Smart Contract Programming Language

IPFS: Distributed File System

This is a quick introduction to calling the Ethereum Virtual Machine using the web3 API, compiling Solidity Smart Contracts, and traversing content addressed data structures on the Interplanetary File System.

These are some of the core technologies that will be used to build  Ðapps.

npm install web3 --save

npm install solc --save

npm install ipfs-api --save

The Ethereum JS Util Library – "ethereumjs-util": "4.5.0"

The Ethereum JS Transaction Library – "ethereumjs-tx": "1.1.2"

The first thing we need to do is get testrpc up and running. Depending on the type of machine you have, this could be rather straightforward or it may take a while. The link below should point you in the right direction.

Configure testrpc

Sending Transactions

Once your test Ethereum Node is running, instantiate a new web3 object. This can be done by doing the following:

Start up node in the console:

var Web3 = require("web3")

var web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"))

We can now call different web3 APIs. Test out the connection to TESTRPC by calling:

web3.eth.accounts

You will see all 10 accounts generated from the TESTRPC.

web3.eth.accounts[0]

You will see the address of the first account generated by the TESTRPC.

web3.eth.getBalance(web3.eth.accounts[0])

You should be returned the balance of your first TESTRPC address.

Converting from Wei to Ether

web3.fromWei(web3.eth.getBalance(web3.eth.accounts[0]), 'ether').toNumber()

var acct1 = web3.eth.accounts[0]

var acct2 = web3.eth.accounts[1]

var balance = (acct) => { return web3.fromWei(web3.eth.getBalance(acct), 'ether').toNumber() }

balance(acct1)

balance(acct2)

Send an Ethereum Transaction

web3.eth.sendTransaction({from: acct1, to: acct2, value: web3.toWei(1, 'ether'), gas: 21000, gasPrice: 2000000000})

Send a Raw Ethereum Transaction

var EthTx = require("ethereumjs-tx")

// pKey1 is the hex private key for acct1, copied from the testrpc startup output (no 0x prefix)
var pKey1x = new Buffer(pKey1, 'hex')

pKey1x

var rawTx = {
  nonce: web3.toHex(web3.eth.getTransactionCount(acct1)),
  to: acct2,
  gasPrice: web3.toHex(2000000000),
  gasLimit: web3.toHex(21000),
  value: web3.toHex(web3.toWei(25, 'ether')),
  data: ""
}

var tx = new EthTx(rawTx)

tx.sign(pKey1x)

tx.serialize().toString('hex')

web3.eth.sendRawTransaction(`0x${tx.serialize().toString('hex')}`, (error, data) => {

if(!error) { console.log(data) }

})

Creating Smart Contracts

var Web3 = require("web3")

var solc = require("solc")

var web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"))

Create a source variable with your Solidity code. (This can be any smart contract code, unless you are trying to import another contract at the top.)

var source = `contract Messenger {

function displayMessage() constant returns (string) {

return "This is the message in the smart contract";

}

}`

var compiled = solc.compile(source)
source

You can then start to unpack the different parts of the compiled contract:

compiled.contracts.Messenger

compiled.contracts.Messenger.bytecode

compiled.contracts.Messenger.opcodes

compiled.contracts.Messenger.interface
var abi = JSON.parse(compiled.contracts.Messenger.interface)

In order to deploy the contract to the network you will need the JSON Interface of the Contract, the abi (application binary interface).

var messengerContract = web3.eth.contract(abi)

messengerContract

var deployed = messengerContract.new({
  from: web3.eth.accounts[0],
  data: compiled.contracts.Messenger.bytecode,
  gas: 470000,
  gasPrice: 5
}, (error, contract) => { })

You should now see in the testrpc the transaction broadcasted to the network.

web3.eth.getTransaction("0x0")

(replace "0x0" with the transaction hash printed by testrpc)

Call a Function in the Contract

deployed.displayMessage.call()

IPFS  – Hashlink Technology

IPFS defines a new paradigm in the way that we can organize, traverse, and retrieve data using a p2p network that serves hash linked data. This is done by using merkle-links between files that are distributed on the interplanetary file system.

(Screenshot: js-ipfs – IPFS implementation in JavaScript)

Content-addressing enables unique identifiers and unique links between data that lives on the distributed network. It creates distributed, authenticated, hash-linked data structures. This is achieved by Merkle-Links and Merkle-Paths.

A merkle-link is a link between two objects which is content-addressed with the cryptographic hash of the target object, and embedded in the source object. This is the way the Bitcoin and Ethereum blockchains work; they are both essentially giant merkle trees: one with blocks of ordered transactions, one with computational operations driving state changes.

IPLD – Interplanetary Linked Data

IPLD is a common hash-chain format for distributed data structures. This creates a database agnostic path notation.

any hash –> any data format

This enables cryptographic integrity checking and immutable data structures. Some of the properties of this are long-term archival, versioning, and distributed mutable state.

It shifts the way you think about fetching content. Content addressing is What vs Where.

A “link” as represented as a JSON object is comprised of the link key and the link value:

{ "/": "ipfs/0x0" }

Here "/" is the link key and "ipfs/0x0" is the link value.

Other properties of the link can be defined in the json object as well.

What if we want to create a more dynamic link object? This can be achieved by using a merkle-path.

A merkle-path is a unix-style path which initially dereferences through a merkle-link and allows access to elements of the referenced node, and of other nodes transitively.

This means that you can design an object model on top of IPLD that would be specialized for file manipulation and have specific path algorithms to query this model.

This would look like:

/ipfs/0x0/a/b/c/d

The path names the protocol, the hash of the linked object, and the traversal within it.

The IPLD spec includes several examples of traversals with the link JSON object.

This is all essentially a JSON structure for traversing files, navigating through the IPLD object, walking along the ipfs hash to pull arbitrary data that is nested in these data structures.
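A sketch of such a traversal with the ipfs-api dag interface, assuming the same local daemon as before; the nested object and the path are illustrative:

var ipfsAPI = require("ipfs-api");
var ipfs = ipfsAPI("localhost", "5001", { protocol: "http" });

// Store a nested object, then dereference a merkle-path into it
ipfs.dag.put({ a: { b: { c: { d: "deeply nested value" } } } },
  { format: "dag-cbor", hashAlg: "sha2-256" },
  function (err, cid) {
    if (err) { return console.error(err); }
    // Equivalent to resolving /ipfs/<cid>/a/b/c/d
    ipfs.dag.get(cid, "a/b/c/d", function (err, result) {
      if (!err) { console.log(result.value); } // "deeply nested value"
    });
  });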

CID: Content Identifier – format for hash-links

Multihash –  multiple cryptographic hashes

Multicodec  – multiple serialization formats

Multibase – multiple base encodings

(Screenshot: Juan Benet – Enter the Merkle Forest, YouTube)

This is very powerful because we can use Content Identifiers to traverse different cryptographic protocols: Bitcoin, Ethereum, ZCash, Git.

We could also link from crypto to Salesforce or a relational DB by using content addressing.

The paths between these now disparate systems can be resolved by using this uniform, immutable, distributed file system.


This is an IPFS-Cluster: collective pinning and composition for IPFS. The cluster tooling provides commands to add and remove CIDs from the peers in the cluster.

(Screenshots: ipfs-cluster – Collective pinning and composition for IPFS; Juan Benet – Enter the Merkle Forest, YouTube)