2018

There is a shift happening, rooted in the rapid expansion of decentralized cryptographic protocols. Public-private key pairs are coming into common use for securing and storing digital assets, providing proof, and determining truth. Where our data lives, who owns it, and who can change it are all questions being asked. Collectively we have a distributed consciousness in the internet; there is a small gap from when we wake up to when we check state. It could be a glance at a cryptocurrency price, notifications, or categorizing messages; the phone is our terminal into a digital world that is layered and distributed. We tap into the information layer, trading intangible digital value on waves that we are collectively starting to operate on constantly. The technology is converging and accelerating; there is an interesting shift happening, and decentralized technologies are driving it.

This past year I learned:

  • How to build Ethereum Blockchain Applications
  • How to build Hyperledger Blockchain Applications
  • How to develop enterprise managed packages on the Salesforce Platform

This is going into the sixth year I have written on this blog. I am at just over 70,000 all-time views.


I hit a little under 28,000 views this year in total, a 250% increase from last year. This was driven mostly by cryptocurrency and ICOs going mainstream and blockchain becoming top of mind in the enterprise throughout the latter half of the year.


Here are the past few years of these same year-in-review posts:

2013: A Year in Review

2014

2015

2016

2017

These were some of my goals from 2017:

Focus in 2017

  • Build Dapps
  • Build with Babel, ReactJS, Redux, GraphQL, and Node
  • Keep building Bots and a VR/AR interface library in Unity
  • Learn about Containers, Docker, Kubernetes
  • Learn how to write Python scripts with the TensorFlow, Theano, and Keras libraries
  • Write Python scripts for the 21.co Bitcoin Computer
  • Write 1000 Ethereum Smart Contract Clauses
  • Read Applied Cryptography cover to cover with annotations
  • Keep a written Daily Journal and Physical Calendar
  • Say my autosuggestion every morning
  • Drink more water, run, and read every day (52 books, 365 miles)
  • Get 100K views on the site
  • More water every day
  • Eat Healthy: Chicken, Vegetables, Almonds, Salmon, Brown Rice, Eggs, Protein Powder, Oatmeal

Some of the highlights from 2017:

  • Started a blockchain company
  • Moved to Barcelona
  • Went to New York for the third time for the Consensus 2017 Hackathon and won awards from IBM Cloud and Wanxiang
  • Went to China for the first time for a Blockchain Conference
  • Presented on Blockchain at Dreamforce
  • Continued to invest in ICOs and trade cryptocurrencies

build, keep, learn, write, read.

The focus in 2018 will be laying the foundation of a great organization for many years to come: using technology to build great products with a great team.

Focus in 2018

  • Build Dapps
  • Build with ReasonML and ReactJS
  • Work with 200 Customers to start building Dapps
  • Work with people passionate about Crypto/Blockchain + Salesforce
  • Build a great team
  • Go to China First Program
  • Learn Mandarin every day on my phone and read the phrase book
  • Run 3 miles and lift every morning
  • Build something incredible with IPFS
  • Build Dapps NLP Engine from scratch with Prolog
  • Build Lisk Apps and deploy from Salesforce
  • Keep a daily journal and a physical calendar
  • Say my autosuggestion every morning
  • Run and read every day
  • Get 150K views on the site
  • Eat Healthy: Chicken, Vegetables, Almonds, Salmon, Brown Rice, Egg Whites, Protein Powder, Tuna, Oatmeal, Almond Milk
  • Drink more water

This year I spent a lot of my time traveling and with that I read and reread some great books:

  • Advanced Apex Programming
  • Building Microservices
  • Designing Data-Intensive Applications
  • Force.com Enterprise Architecture
  • Traction
  • Emotional Agility
  • Learning React
  • The Hard Thing About Hard Things
  • Slicing Pie
  • The Reputation Economy
  • Scar Tissue (on a single flight from London to Seattle)
  • Applied Cryptography with Annotations
  • The Startup Playbook
  • The Social Organism
  • Sam Walton’s Made in America
  • Think and Grow Rich
  • Ponder on This

This year I really got into numerology. I started using it for trading cryptocurrencies and a number of other things. I think it is part of the other side of the coin, the metaphysical side we don’t often talk about, and I find it very interesting.

Virtualization and Distributed Microservices

  • Microservices
  • The Art of Capacity Planning
  • ZooKeeper
  • Cryptography infosec
  • Docker Containers

This year I definitely became more interested in leveraging virtual machines and building with containers. More and more research into decentralized/distributed systems, cryptography, virtualization, microservices, tokenization; these are the technologies leading the shift. Diving into architecture around technologies like Docker Swarm, Kong API Gateway and NGINX, Kafka, and ZooKeeper. Learning to use decentralized technologies like Urbit and IPFS. Running and interacting with Bitcoin Core. Using Ubuntu VMs to run an Ethereum node provider or Hyperledger nodes, but overall just testing out different software using containers in a virtual machine instance. I think the combination of git, node, and docker is very powerful for developing web applications. Learning to use vim was very valuable for the VMs as well. Teams can ssh in, push to shared repos, and sync; it is aligned with the open-source and distributed nature of the work. I have worked on defining multiple blockchain protocols and how they differ from each other: which ones to focus my time on, and which ones have the greatest upside long term. The yes or no on whether to dive in is always a question that takes time; the one technology I really went down a rabbit hole with this year was IPFS.

IPFS

I think that a decentralized filesystem built on hash links and content addressing is the future of the internet.

I am really excited to build with IPFS and combine it with other p2p protocols. The shift towards decentralized p2p technologies such as Bitcoin and IPFS is part of a shift towards individuals having full ownership, responsibility, and control of their valuable digital assets. A digital asset can be a coin, but also personal data, or the metadata about what we do. These networks are shifting the pendulum back towards personal computing that can still be networked and used, but not through a custodian service that tells us where to fetch our data from.
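As a small taste of what that looks like from Node, here is a minimal sketch of adding content to a local IPFS node, assuming the js ipfs-api package of that era and a daemon running on the default API port:

var ipfsAPI = require('ipfs-api');
var ipfs = ipfsAPI('localhost', '5001', { protocol: 'http' });

// Add a file; the hash that comes back is the content address,
// valid on any node in the network that holds the same bytes.
ipfs.files.add(Buffer.from('hello, decentralized world'), function (err, res) {
  if (err) { return console.error(err); }
  console.log('content address:', res[0].hash); // Qm...
});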

The concept of What vs Where a piece of data is:

  • Tamper proof/evident
  • Permanent
  • Content Addressed
  • Immutable
  • Ever Present
  • Traversable
  • Hash Agnostic
  • Interoperable

It is, at its crux, a powerful concept.
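A minimal sketch of the idea in Node (plain SHA-256 here, not IPFS’s multihash): the address is derived from the content itself, so the same bytes produce the same address no matter which machine they live on, and any change produces a new address.

var crypto = require('crypto');

function contentAddress(content) {
  // What the data is (its hash), not where it is (a location/URL)
  return crypto.createHash('sha256').update(content).digest('hex');
}

console.log(contentAddress('hello'));  // identical on every machine, forever
console.log(contentAddress('hello!')); // one changed byte, a new address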

Mining

I set my miners back up; the price of BTC made it interesting. It didn’t take too long: I just had to order a few wires from Amazon and they were printing digital gold again. Though not profitable (in dollars at least; it is still creating bitcoins), there is nostalgia in the sound and in checking to see how much was created overnight. Personal capital invested in machinery that you own, creating something, is why I like it.

Business

I learned a lot about starting a business over the past year: everything from startup financing to product development sprints, to equity, vesting, boards, contracts, and negotiations. It is all part of it. A few months in I wrote something along the lines of: starting a company is a mix of assumptions, strategies, emotions, decisions, expectations, risks, costs, disciplines, interests; but above all belief in a vision and a drive fueled by passion and focused persistence. One of the most important things, however, I believe is timing. When building Dapps.ai this year I spent a lot of time sitting in my studio in Barcelona, drawing out on the back of a physical calendar what an interface could look like, what problem it solved, what technologies it was built on, and how long it would take. I have months and months of these notes, and in review, I think timing was the underlying current in, and catalyst for, everything. At the beginning of January 2017, the hedge was that cryptocurrency and blockchain would eventually “go mainstream” and companies would have to adopt a strategy; the question was just when. The different technologies to choose from will, in time, become self-evident. I wrote down everything I thought would need to be part of the focus: Bitcoin, Ethereum, ZCash, Solidity, ethercamp, Blockstack, 21.co, Tierion, Lightning, Chain, Corda, Lisk, Hyperledger, Monax, truffle, metamask, web3, zeppelin, next.js, ipfs, swarm, bigchaindb, infura, raiden, string labs, parity, uPort.

(A lot of it focused on how one can securely prove, sign, and send with a private key when interacting with Ethereum dapps.)

“Knowing enough to think you’re right, but not enough to know you’re wrong. What single bit of evidence, what would it take to show you that you’re wrong.” -NDT

Fast forward a year, and the overall awareness of the market and industry is second to none, yet still in its infancy on a global scale. With this comes the realization that it will have to cycle back down (as it is cyclical), and we will have to continue to build upon our learnings and decide which technology will shape the future. Knowing what technology to focus on and devote time to is probably one of the tougher parts. The other part of this process was learning how to bring a product to market in a timely manner: coming up with an MVP, building a package, learning about execution context and limits, and putting the package through security review. In addition, it made me think about things like serialization, encryption and decryption, key management for applications, and how to price and market an application. I want to continue to learn how to build on the Salesforce Platform and incorporate many other blockchain protocols. Every programming language, technology, platform, syntax; it all builds on each other. Building upon administrative and developer skills on the Salesforce platform and the different computer science disciplines in the blockchain space; that is where the core focus is. Event-driven applications that can be used with existing customer data, implemented processes, and other apps are what drive a lot of the value of the combination; the polyglot persistence.

Crypto

Having watched the prices of all of these cryptoassets for about 5 years now: you could day trade, but you could also create wallets offline, store the keys in a safe place, send a little of each of the top 20 coins to them, move to some island or forest, and not touch them.

coinmarketcap.com on January 4th, 2017

It is an incredible time to see all of these networks grow the way they have. Watching the market grow from $100 billion to over half a trillion in the latter half of this year was unbelievable.

The groups forming to take part in this movement, whether on Twitter or in chat groups, are a driving force. In retrospect, years from now, it will be obvious that decentralized cryptocurrencies, electronic cash, and digital tokens were what shaped the future. I won’t go into any price predictions. I will say that I believe network function tokens for things like computation/storage, tokens that enable private transactions, and token-driven platforms for decentralized application development will all continue to become networks that increase in demand and overall value.


 

Crypto and Blockchain

There is a polarizing distinction between the two sides: crypto and blockchain.

You have crypto, driven by token-incentivized open projects, and you have enterprise blockchain platforms, driven by enterprise cash and consortia. I think there is value in both, and things will be invented on both sides that contribute to the overall development and progression of the technology.

The analogy is like an intranet and the internet. Another good quote was something like: you can have an acoustic show or a rock concert, but you can’t have one disguised as the other.

This is a bit of the sentiment I feel when hearing questions like “what is blockchain?”, versus using open-source, protocol-based digital tokens and their respective blockchains. Similarly, developing applications with Ethereum and Hyperledger uses two different sides of the brain. In one you are thinking about how to sign messages from a browser, gas optimization and price, and virtual abstractions that can be tokenized and traded; in the other you are thinking in abstractions for business logic and who has access to the distributed infrastructure. One is a singleton that is updated and may become congested, but ultimately is a live and decentralized state machine. Both have the common goal of achieving consensus on the state of a system: Bitcoin on a state of UTXOs, Ethereum on a state of accounts, Hyperledger on the state of the validity and order of transactions. I think that Proof-of-Work and the largest distribution of public-private key infrastructure, providing a verifiable, censorship-resistant informational system, is at its core the 0 to 1.

  • Decentralization
  • Security
  • Scalability

Again, from all of the above technologies there will continue to be an ecosystem that is growing and ever changing. I started this year spending time configuring my testrpc, then becoming familiar with the web3 and solc npm packages at the command line, and finally understanding how to create and sign Ethereum messages from the browser to deploy smart contracts.
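That workflow, as a rough sketch against testrpc on its default port, using the web3 0.20.x and solc 0.4.x APIs of the time:

var Web3 = require('web3');
var solc = require('solc');
var web3 = new Web3(new Web3.providers.HttpProvider('http://localhost:8545')); // testrpc

// Compile a trivial contract in-process
var source = 'pragma solidity ^0.4.0; contract Counter { uint public count; function inc() public { count += 1; } }';
var compiled = solc.compile(source, 1).contracts[':Counter'];

// Deploy from the first unlocked testrpc account
web3.eth.contract(JSON.parse(compiled.interface)).new(
  { from: web3.eth.accounts[0], data: '0x' + compiled.bytecode, gas: 1000000 },
  function (err, contract) {
    if (!err && contract.address) {
      console.log('deployed at', contract.address);
    }
  }
);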

Web3, Solc, IPFS

By the end of the year I was able to create an application on the Salesforce AppExchange to do the same: an Ethereum client for the Salesforce platform. Abstract away the metalanguage and concepts, and create an interface that is dynamic and gives companies a way to manage all of the necessary components of these networks. I didn’t focus on building the killer app with the existing frameworks. The killer app of blockchains so far has been using and trading digital tokens. The two main networks whose functions have been used are Bitcoin, and Ethereum for creating more tokens. We are just beginning to see dapps created for something other than the creation of tokens, i.e. non-fungible tokens and rarity. Tokenization of digital assets is undoubtedly going to continue to rise. Though we may not be using (most of) these digital tokens in the physical world, in the digital world, where we continue to spend more and more of our time, these tokens will be used.

Hyperledger

The second half of the year I spent a lot of time learning and building with Hyperledger Fabric and Hyperledger Composer. The main difference is that with this technology you are building everything from the ground up: the infrastructure, a blockchain explorer, the business network layer (participants, assets, transaction logic), the GUI to use the network; it is a different process than spinning up a node app and calling the Ropsten testnet. Configuring Dockerfiles, spinning up instances, writing the network model; though in JavaScript, it is much different than deploying Solidity to a singleton and creating a web3 UI to interface with the contract.

Dapps Inc.

I want this company to grow globally, across different geolocations, computer science disciplines, and technologists; and I want to build a company that is built on products, revenue, and customers. I am interested in making decentralized technologies like Bitcoin, Ethereum, Hyperledger, and IPFS useful for businesses around the world.

This next year at Dapps Inc. my sole focus is on four main things:

  • capital – cryptoasset fund
  • chat – build the village
  • network – nodes and infrastructure
  • solutions – decentralized solutions

I want to continue to learn from a larger community and the great minds in the space.

  • Build the best team of people
  • Build the best products and services
  • Have customers pay us for the products and services.

The ability to use decentralized computing, with processing and decisions made at the edge endpoints of the graph, creates the fastest feedback loop. We can now access this edge data and update state for others on a p2p network. With this come the security and networking challenges of connecting and processing this next era of computing. The decentralized and distributed era of technology is rapidly becoming reality. I truly enjoy working on this technology. I believe in the economic incentive mechanisms of peer-to-peer cryptographic protocols, and I am all for open-source founders who can monetize and users who can become equity owners. That shift is already happening. It is important to keep building, developing, and combining the latest technology, learning every day.

Definiteness of purpose, the knowledge of what one wants, and a burning desire to possess it. I will continue to build the mastermind group and devote energy to drive combinatorial creativity through technology and people.

Happy New Year.

Live the Dream.

-Dom

Thoughts on Bitcoin

Bitcoin is.

The security of bitcoin as a protocol is derived from the quantum properties it uses as a secure informational state machine. The state transitions can be trusted based on pure mathematics.

  • Elliptic Curves
  • Digital Signatures
  • Proof-of-Work

Bitcoin as a system is the most powerful machine / computer in the world.

It has already achieved this in just under 10 years.

What makes it incredibly valuable with respect to the concept of money is that it is fairly easy to write to the most powerful machine in the world.

State is live.

Transactions that are submitted are picked up by 98% of the hashing power of the machine.

It is a universal singleton.

By definition it is information creation in the purest form.

It is quantum in that a bitcoin address exists in a digital layer whose function is created on a spent output and used on input.

That act of transferring a bitcoin is that which creates the value.

No digital file is transferred.

Bitcoin is simply a namespace for the act of quantum transactional state verification.

It is a wave until the act of spending makes it material. It is intangible yet veridical.

Consensus is secured by gravity and energy.

There is a distribution of energy that is secured by the ability to call and announce state change verification: its inputs and outputs on the network.

It is a graph database (d = 1.32).

Hashlink:

ipfs/QmZHh2KKXTFtdmk5ybV2hmjXZvEpmyXEvj11DnNoguukmF

Interoperable, Scalable Blockchain Protocols

Last week I attended the Wanxiang Global Blockchain Summit in Shanghai. There were a number of sessions but one that really stuck with me was about potential scaling solutions for distributed consensus protocols.

There is a concept called polyglot persistence, which essentially means certain databases are meant for certain purposes. You have SQL databases, NoSQL databases, graph databases, blockchains, distributed fault-tolerant consensus ledgers, CRDTs; all of these different types of protocols. They have different consensus mechanisms, data models, query languages, and state-change and message-ordering models such as validators or orderers; but in order to effectively scale they need to be able to interoperate and complement each other.

They will need to be able to share account state. They will need to be able to resolve any concurrency issues that arise. They will need to be able to resolve replication and be eventually consistent.

Most of my time is spent working at the application layer, but an assumption I hold is that building solutions that incorporate multiple protocols will be important in the future.

This type of approach is not often found in the blockchain space, specifically because of the economic and personal incentives one has if heavily invested in a specific protocol. This could be time spent learning Solidity versus Kotlin; this could be owning a bunch of Ether versus NEO; this could be working at a company that is heavily contributing to a particular open project.

Regardless of one’s preference in blockchain/distributed ledger/consensus database, this past trip to Shanghai validated that taking a multi-blockchain approach, and being open to learning the differences between them and how they could complement each other, is going to have a great effect on the way solutions are built in the future.

An example of this is the new Ethermint Token.

Ethereum and Tendermint

In what is being called the Shanghai Accord, a new partnership between the two projects was announced in which they will work together to help scale blockchain technology. Tendermint, a Proof-of-Stake consensus protocol, with its new “internet of blockchains” hub, project Cosmos, is working closely with the Ethereum project to create derivative tokens on Tendermint infrastructure in what is being called a hard spoon: a new coin is minted on another chain (Tendermint) that uses the account state or balances of an existing token on a chain (Ethereum), and we have Ethermint. This could provide a scalable cross-blockchain solution in which tokens have shared state/account balances on both the Ethereum chain and the Tendermint chain.

Content-Addressed Blockchain Resolvers

This is but one approach to interoperable blockchains. Other approaches have been proposed historically, such as pegged sidechains, but one that seems very interesting to me is leveraging IPFS at the network layer to content-address and resolve across the different blockchains/databases (as long as there is a key-value store) using multihash and the IPLD thin-waist protocol. It is still largely a concept, but again it is aligned with the idea of having multiple chains interoperating to address some of the issues that blockchains are facing today.
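The multihash piece of that is simple to sketch: a self-describing hash prefixes the digest with a function code and a length, so any key-value store can tell how the content was hashed. A minimal sketch in Node (not the multihashes library itself):

function decodeMultihash(buf) {
  return {
    fnCode: buf[0],                  // e.g. 0x12 = sha2-256
    length: buf[1],                  // e.g. 0x20 = 32 bytes
    digest: buf.slice(2, 2 + buf[1]) // the raw hash of the content
  };
}

// A sha2-256 multihash starts 0x12 0x20, followed by the 32 digest bytes
var mh = Buffer.concat([Buffer.from([0x12, 0x20]), Buffer.alloc(32)]);
console.log(decodeMultihash(mh));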

Plasma

Another approach is using Plasma; this allows transactions to occur off-chain, only broadcasting them to the chain if there is a transaction dispute (you never sent me the money, or vice versa). In this way the blockchain becomes an arbiter or a distributed courthouse: a way to have veridical computing when there is in fact a need for chain-based settlement.

Proof-of-Stake

Another takeaway and interesting concept is the movement to Proof-of-Stake. Ethereum’s movement towards PoS is embodied in Vlad Zamfir’s Casper protocol. Other chains such as Lisk use a DPoS system, and Tendermint is on PoS as well. A common theme between them is the concept of weight. The stake, and the weight of stake, has some very interesting implications for the security and threat models that could be introduced to the protocol. More on Proof-of-Stake/Casper in another post.

There have been a few cross-chain atomic swaps recently as well.

Litecoin + Decred

https://blog.decred.org/2017/09/20/On-Chain-Atomic-Swaps/

And cross chain transaction component verifications.

ZKSnarks + Ethereum

The Byzantium hard fork went live on Ethereum’s Ropsten testnet at block 1,700,000; one of the features was support for zk-SNARKs, and a zk-SNARK component of a Zcash transaction was verified on the testnet.

Public Chains and Financial Services

Lastly, from Nick Szabo’s talk on smart contracts, the idea of a smart contract acting as a digital, distributed vending machine was very interesting. There are certain security guarantees, and the rules are deterministic and known (you put money into the machine and something comes out). I have heard the vending machine analogy from Chain.com as well; the concepts share the same vision of taking the best of public blockchain technology and the best of existing financial services.

 


 

Salesforce DX Platform Development

Salesforce DX is a new integrated experience designed for high-performance, end-to-end agile Salesforce development that is both open and flexible.

Before Salesforce DX, all custom objects and object translations were stored in one large Metadata API source file.

Salesforce DX solves this problem by providing a new source shape that breaks down these large source files to make them more digestible and easier to manage with a version control system.


A Salesforce DX project stores custom objects and custom object translations in intuitive subdirectories. This source structure makes it much easier to find what you want to change or update. Once you have developed and tested your code you can then convert the format of your source so that it’s readable by the Metadata API, package it, and deploy it.


This is a complete shift from monolithic, org-based development to modular, artifact-based development, while also enabling continuous integration (CI) and continuous delivery (CD). This means development teams can develop separately and build toward a release of the artifact, not a release of updates to the org.

This will be a working post as I continue to go through the release notes and work with the new platform tools.

Resources

Get Started with Salesforce DX

Salesforce DX Developer Guide

Salesforce CLI Command Reference

Install the Tools:

DX CLI – install here

Force IDE 2 – install using this link and install the prerequisite Java kit here

Dev Hub (Trial Org) – sign up for a dev hub trial

Create your directory

Open up Git Bash:

cd c:

mkdir dx

cd dx

mkdir projectname

cd projectname

 

Configuring your DX project

Salesforce DX projects use a particular directory structure for custom objects, custom object translations, Lightning components, and documents. For Salesforce DX projects, all source files have a companion file with the “-meta.xml” extension.

When in the project directory:

sfdx force:project:create -n projectname

ls

you will see a few files.

The project configuration file sfdx-project.json indicates that the directory is a Salesforce DX project.

If the org you are authorizing is on a My Domain subdomain, update your project configuration file (sfdx-project.json)

"sfdcLoginUrl" : "https://somethingcool.my.salesforce.com"

Package directories indicate which directories to target when syncing source to and from the scratch org. Important: Register the namespace with Salesforce and then connect the org with the registered namespace to the Dev Hub org.
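For reference, a minimal sfdx-project.json looks roughly like this (the values here are placeholders):

{
  "packageDirectories": [
    { "path": "force-app", "default": true }
  ],
  "namespace": "",
  "sfdcLoginUrl": "https://login.salesforce.com",
  "sourceApiVersion": "41.0"
}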

Creating your VCS Repo

You can now initiate a git repo and connect it to bitbucket or github.

create bitbucket / github repo

git init
git add .
git commit -am 'first dx commit'
git push -u origin master

if you go to your repo in the browser you should see the dx json config file and sfdx structure.

cd projectname

Creating a Scratch Org

sfdx force:org:create -s -f config/project-scratch-def.json -a ipfsDXScratch

You don’t need a password, but you can add one using force:user:password:generate.

You can also target a specific Dev Hub that isn’t the default:

sfdx force:org:create --targetdevhubusername jdoe@mydevhub.com --definitionfile my-org-def.json --setalias yet-another-scratch-org

--targetdevhubusername sets a Dev Hub org other than the default.

The Scratch Org

You can have

  • 50 scratch orgs per day per Dev Hub
  • 25 active scratch orgs
  • They are deleted after 7 days

The scratch org is a source-driven and disposable deployment of Salesforce code and metadata. A scratch org is fully configurable, allowing developers to emulate different Salesforce editions with different features and preferences. And you can share the scratch org configuration file with other team members, so you all have the same basic org in which to do your development.

The scratch org definition file is what makes the project and scratch org configurable. You can now define what type of org, permissions, and settings you want to build and test in. It is a way to build and test with certain platform features, application settings, and circumstances while also being able to version and have source control.

This also enables continuous integration, because you can test against a use case in one of the scratch orgs and see the impact based on the configuration or by importing data from the source org. Once bulkified and tested, it can be pushed back into the source org.

Scratch orgs drive developer productivity and collaboration during the development process, and facilitate automated testing and continuous integration.
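A typical loop, as a sketch (the alias and config file names here are placeholders):

sfdx force:org:create -s -f config/project-scratch-def.json -a ci-org
sfdx force:source:push
sfdx force:apex:test:run --resultformat human
sfdx force:org:delete -u ci-org -p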

Use many scratch org config files and name them accordingly; they are the blueprint for the org shape.

Ethereum-scratch-def.json

Production-scratch-def.json

devEdition-scratch-def.json

{
  "orgName": "Acme",
  "country": "US",
  "edition": "Enterprise",
  "features": "MultiCurrency;AuthorApex",
  "orgPreferences": {
    "enabled": ["S1DesktopEnabled", "ChatterEnabled"],
    "disabled": ["SelfSetPasswordInApi"]
  }
}

Here is a complete list for the Features and Preferences that you can configure.

Authorizing your DX Project

sfdx force:auth:web:login --setalias my-sandbox

This will open the browser and you can login

sfdx force:org:open  --path one/one.app

For JWT and CI

sfdx force:auth:jwt:grant --clientid 04580y4051234051 --jwtkeyfile /Users/jdoe/JWT/server.key --username jdoe@acdxgs0hub.org --setdefaultdevhubusername --setalias my-hub-org --instanceurl https://test.salesforce.com

Creating

Create a Lightning component, Apex controller, or Lightning event at the command line:

sfdx force:lightning:component:create -n TokenBalance -d force-app/main/default/aura

 

Testing

sfdx force:apex:test:run --classnames TestA,TestB --resultformat tap --codecoverage

Packaging

First convert from Salesforce DX format back to Metadata API format

sfdx force:source:convert --outputdir mdapi_output_dir --packagename managed_pkg_name

Deploy to the packaging org

sfdx force:mdapi:deploy --deploydir mdapi_output_dir --targetusername me@example.com

Creating a Beta version

  1. Ensure that you’ve authorized the packaging org:
     sfdx force:auth:web:login --targetusername me@example.com
  2. Create the beta version of the package:
     sfdx force:package1:version:create --packageid package_id --name package_version_name

Managed-Released Version

Later, when you’re ready to create the Managed – Released version of your package, include the -m (--managedreleased) parameter:

sfdx force:package1:version:create --packageid 033xx00000007oi --name "Spring 17" --description "Spring 17 Release" --version 3.2 --managedreleased

After the managed package version is created, you can retrieve the new package version ID using force:package1:version:list

Installing a package

sfdx force:package:install --packageid 04txx000000069zAAA --targetusername <username> --installationkey <key>

sfdx force:package1:version:list

(use a bitcoin/ethereum/token address as the package installation key)

Check to see if the buyer has paid and, if they have, it installs.
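A hedged sketch of that idea in Node: check a (hypothetical) installation-key address for payment using blockchain.info’s plain-text address-balance query before proceeding with the install.

var https = require('https');

// Hypothetical: the package installation key doubles as a bitcoin address
var installationKey = '1ExampleInstallationKeyAddress';

https.get('https://blockchain.info/q/addressbalance/' + installationKey, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var satoshis = parseInt(body, 10);
    console.log(satoshis > 0 ? 'paid: proceed with install' : 'no payment found');
  });
});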

Push and Pull

sfdx force:source:push

This is used to push source from your project to a scratch org.

It will push the source code to the scratch org you have set as the default, or you can specify an org other than the default by using --targetusername or -u.

.forceignore – Any source file or directory that begins with a “dot,” such as .DS_Store or .sfdx, is excluded by default.

sfdx force:source:pull

This is used to pull changes from your scratch org into your project.

Setting Aliases

sfdx force:alias:set org1=name org2=name2

To remove an alias, set it to nothing.

 sfdx force:alias:set my-org=

Listing your Orgs

sfdx force:org:list

(D) points to the default Dev hub username

(U) points to the default scratch org username

Retrieving and Converting Source from a Managed Package

Working with Metadata has usually required a tool like ANT. Now you can retrieve unmanaged and managed packages into your Salesforce DX project. This is ideal if you have a managed package in a packaging org. You can retrieve that package, unzip it to your local project, and then convert it to Salesforce DX format, all from the CLI.

Essentially you can take your existing package and reshape it into the Salesforce DX project format.

A custom object is stored with subdirectories:

  • businessProcesses
  • compactLayouts
  • fields
  • fieldSets
  • listViews
  • recordTypes
  • sharingReasons
  • validationRules
  • webLinks

 

mkdir mdapipkg

sfdx force:mdapi:retrieve -s -r ./mdapipkg -u username -p packagename

sfdx force:mdapi:retrieve -u username  -i jobid

Convert the metadata API source to Salesforce DX project format.

sfdx force:mdapi:convert --rootdir <retrieve_dir_name> --outputdir <output_dir>

Additional Things

To get JSON responses to all Salesforce CLI commands without specifying the --json option each time, set the SFDX_CONTENT_TYPE environment variable: export SFDX_CONTENT_TYPE=JSON

Log levels (--loglevel DEBUG):

ERROR, WARN, INFO, DEBUG, TRACE

To globally set the log level for all CLI commands, set the SFDX_LOG_LEVEL environment variable. For example, on UNIX: export SFDX_LOG_LEVEL=DEBUG

How to setup and build Hyperledger Fabric Blockchain Applications

This is an introduction to how to configure and launch the Hyperledger Fabric v1.0 permissioned blockchain network on an Ubuntu 16.04 Virtual Machine.

If you want to skip configuring the VM/images check out the IBM Blockchain managed service: https://console.ng.bluemix.net/catalog/services/blockchain/

Below are the command line steps, in addition to links to additional guides on configuring your infrastructure and deploying a business network.

The first part of this article is focused on the infrastructure layer, Hyperledger Fabric.

The second part of this article is focused on the application layer, Fabric Composer.

The third part of this article is focused on wiring up everything such as linking to Angular Apps, Composer Playground and Importing and Managing Cards.

You can spin up the latest versions of Fabric V1 and use the shell scripts for standup and teardown of the network in the second part.

The Infrastructure Layer

Creating your VM on AWS

The first thing you are going to do is go to AWS and create a new account:

Go to https://aws.amazon.com/

Create your account; you will need to create a support request to increase the number of EC2 instances you can have.

Once your limit has increased you will launch a new EC2 Virtual Machine using the Launch Wizard.

Choose your Instance Type: c3.large (You will be charged for the VM).

Go through the configuration screens and add storage to your VM 8/32/64 GB should work.

Once this has been done go to the Launch screen and generate a new key-pair for this Virtual Machine.

You will download the key pair and put it in some folder on your local machine.

Next the instance will launch and you will be able to see on your dashboard.

Copy the URL from your Virtual Machine

Open Git on your machine and go to the directory where you have downloaded the key to your Virtual Machine.

then ssh -i ./yourkeyname.pem ubuntu@yoururlname

Once you have SSHed in, you are able to use the Virtual Machine.

Below are the exact steps, or you can continue to the more verbose version in the rest of the article.

This is the exact step-by-step way to do it in less than 20 minutes:


sudo apt-get update
 wget https://storage.googleapis.com/golang/go1.7.4.linux-amd64.tar.gz
 sudo tar -xvf go1.7.4.linux-amd64.tar.gz
 sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
 sudo apt-add-repository 'deb https://apt.dockerproject.org/repo ubuntu-xenial main'
 sudo apt-get install -y docker-engine
 sudo curl -o /usr/local/bin/docker-compose -L "https://github.com/docker/compose/releases/download/1.11.2/docker-compose-$(uname -s)-$(uname -m)"
 sudo chmod +x /usr/local/bin/docker-compose
 mkdir ~/fabric-tools && cd ~/fabric-tools
 curl -O https://raw.githubusercontent.com/hyperledger/composer-tools/master/packages/fabric-dev-servers/fabric-dev-servers.zip
 sudo apt install unzip
 unzip fabric-dev-servers.zip
 curl -O https://hyperledger.github.io/composer/prereqs-ubuntu.sh
 chmod u+x prereqs-ubuntu.sh
 ./prereqs-ubuntu.sh
 cd fabric-tools
 ./downloadFabric.sh
 ./startFabric.sh
 ./createComposerProfile.sh
 cd ..
 git clone https://github.com/hyperledger/composer-sample-networks.git
 cp -r ./composer-sample-networks/packages/basic-sample-network/ ./my-network
 rm -rf composer-sample-networks
 cd my-network
 npm install
 npm install -g composer-cli
 npm install -g composer-rest-server
npm install -g composer-playground
composer card create -p connection.json -u PeerAdmin -c Admin@org1.example.com-cert.pem -k 114a -r PeerAdmin -r ChannelAdmin
composer card import -f admin@<name>.card
composer runtime install -c PeerAdmin@fabric-network -n <network name>
composer network start -c PeerAdmin@fabric-network -a <name>.bna -A admin -S adminpw
composer network deploy -p hlfv1 -a basic-sample-network.bna -i PeerAdmin
composer-rest-server

Don’t forget to change your security groups to open inbound/outbound traffic. If you go to the URL of your VM in the browser after deploying your REST server (i.e., yoururl:3000) and don’t get a response, you need to edit your security groups.


Configuring your Virtual Machine

There are a number of packages that you are going to need to install and configure in your VM.

Install Git, Nodejs and GO

Digital Ocean Guide – Node.js

Installing GO and setting your GOPATH

*** Update: use NVM for node and npm (no sudo required); this will be important for your Fabric Composer setup ***

sudo apt-get update

sudo apt-get install git

sudo apt-get install nodejs (or use nvm)

sudo apt-get install npm (or use nvm)

wget https://storage.googleapis.com/golang/go1.7.4.linux-amd64.tar.gz
sudo tar -xvf go1.7.4.linux-amd64.tar.gz
sudo mv go /usr/local
cd ~
mkdir yourgodirectoryname
export GOROOT=/usr/local/go
export GOPATH=$HOME/yourgodirectoryname
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
sudo nano ~/.profile

Add the three export commands above to the bottom of your ~/.profile file.

Install Docker 1.12 and Docker-Compose 1.18

Digital Ocean Guide

sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
sudo apt-add-repository 'deb https://apt.dockerproject.org/repo ubuntu-xenial main'
sudo apt-get update

sudo apt-get install -y docker-engine

docker


sudo curl -o /usr/local/bin/docker-compose -L "https://github.com/docker/compose/releases/download/1.11.2/docker-compose-$(uname -s)-$(uname -m)"
sudo chmod +x /usr/local/bin/docker-compose
docker-compose -v

*** At this point you should have a Linux VM fully set up, and you are now ready to download the Docker images for the various peers of your blockchain network. ***

Clone the Hyperledger Fabric v1 Images

You can download the images here and this will show you how to call chaincode directly at the command line. Later on we will use Composer to spin up docker images and write to the chain by deploying a business network archive.

Architecture Overview

All Hyperledger projects follow a design philosophy that includes a modular extensible approach, interoperability, an emphasis on highly secure solutions, a token-agnostic approach with no native cryptocurrency, and the development of a rich and easy-to-use Application Programming Interface (API). The Hyperledger Architecture WG has distinguished the following business blockchain components:

  • Consensus Layer – Responsible for generating an agreement on the order and confirming the correctness of the set of transactions that constitute a block.
  • Smart Contract Layer – Responsible for processing transaction requests and determining if transactions are valid by executing business logic.
  • Communication Layer – Responsible for peer-to-peer message transport between the nodes that participate in a shared ledger instance.
  • Data Store Abstraction – Allows different data-stores to be used by other modules.
  • Crypto Abstraction – Allows different crypto algorithms or modules to be swapped out without affecting other modules.
  • Identity Services – Enables the establishment of a root of trust during setup of a blockchain instance, the enrollment and registration of identities or system entities during network operation, and the management of changes like drops, adds, and revocations. Also provides authentication and authorization.
  • Policy Services – Responsible for policy management of various policies specified in the system, such as the endorsement policy, consensus policy, or group management policy. It interfaces and depends on other modules to enforce the various policies.
  • APIs – Enables clients and applications to interface to blockchains.
  • Interoperation – Supports the interoperation between different blockchain instances.

SOURCE: https://www.hyperledger.org/wp-content/uploads/2017/08/HyperLedger_Arch_WG_Paper_1_Consensus.pdf

Getting Started

*You can skip this section if you want to stand up Fabric for building applications with Composer, but if you want to manually install and invoke chaincode, this section will go over that.

This is the Fabric Getting Started Guide and here is the end-to-end guide on Github

cd ~
cd yourgodirectoryname
mkdir src
cd src
mkdir github.com
cd github.com
mkdir hyperledger
cd hyperledger
git clone https://github.com/hyperledger/fabric.git
sudo apt install libtool libltdl-dev

Download the Docker Images for Fabric v1.0

cd fabric
make release-all
make docker
docker images

You should see your Docker images (these are for x86_64-1.0.0-alpha2):

REPOSITORY TAG IMAGE ID CREATED SIZE
dev-peer0.org1.example.com-marbles-1.0 latest 73c7549744f3 6 days ago 176MB
hyperledger/fabric-couchdb latest 3d89ac4895f9 12 days ago 1.51GB
hyperledger/fabric-couchdb x86_64-1.0.0-alpha2 3d89ac4895f9 12 days ago 1.51GB
hyperledger/fabric-ca latest 86f4e4280690 12 days ago 241MB
hyperledger/fabric-ca x86_64-1.0.0-alpha2 86f4e4280690 12 days ago 241MB
hyperledger/fabric-kafka latest b77440c116b3 12 days ago 1.3GB
hyperledger/fabric-kafka x86_64-1.0.0-alpha2 b77440c116b3 12 days ago 1.3GB
hyperledger/fabric-zookeeper latest fb8ae6cea9bf 12 days ago 1.31GB
hyperledger/fabric-zookeeper x86_64-1.0.0-alpha2 fb8ae6cea9bf 12 days ago 1.31GB
hyperledger/fabric-orderer latest 9a63e8bac1f5 12 days ago 182MB
hyperledger/fabric-orderer x86_64-1.0.0-alpha2 9a63e8bac1f5 12 days ago 182MB
hyperledger/fabric-peer latest 23b4aedef57f 12 days ago 185MB
hyperledger/fabric-peer x86_64-1.0.0-alpha2 23b4aedef57f 12 days ago 185MB
hyperledger/fabric-javaenv latest a9ca2c90a6bf 12 days ago 1.43GB
hyperledger/fabric-javaenv x86_64-1.0.0-alpha2 a9ca2c90a6bf 12 days ago 1.43GB
hyperledger/fabric-ccenv latest c984ae2a1936 12 days ago 1.29GB
hyperledger/fabric-ccenv x86_64-1.0.0-alpha2 c984ae2a1936 12 days ago 1.29GB
hyperledger/fabric-baseos x86_64-0.3.0 c3a4cf3b3350 4 months ago 161MB

Create your network artifacts

cd examples
cd e2e_cli
./generateArtifacts.sh <channel-ID>

Make sure to choose a channel name for <channel-ID>, i.e. mychannel.

then run the below to launch the network:

CHANNEL_NAME=<channel-id> TIMEOUT=<pick_a_value> docker-compose -f docker-compose-cli.yaml up -d

or use this script to launch the network in one command.

./network_setup.sh up <channel-ID> <timeout-value>
docker ps

To manually run and call the network use

Open the docker-compose-cli.yaml file and comment out the command to run script.sh. Navigate down to the cli container and place a # to the left of the command. For example:

  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
# command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}; sleep $TIMEOUT'

Save the file and return to the /e2e_cli directory.

# Environment variables for PEER0
CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/peer/peer0/localMspConfig
CORE_PEER_ADDRESS=peer0:7051
CORE_PEER_LOCALMSPID="Org0MSP"
CORE_PEER_TLS_ROOTCERT_FILE=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/peer/peer0/localMspConfig/cacerts/peerOrg0.pem
docker exec -it cli bash
peer channel join -b yourchannelname.block
peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
# remember to preface this command with the global environment variables for the appropriate peer
# remember to pass in the correct string for the -C argument.  The default is mychannel
peer chaincode instantiate -o orderer0:7050 --tls $CORE_PEER_TLS_ENABLED --cafile $GOPATH/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem -C mychannel -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Args":["init","a", "100", "b","200"]}' -P "OR ('Org0MSP.member','Org1MSP.member')"
peer chaincode invoke -o orderer0:7050  --tls $CORE_PEER_TLS_ENABLED --cafile $GOPATH/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem  -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

You now should have a functioning network that you can call.

To Setup and use Couch DB for richer queries follow the tutorial here.

To clean up and bring down the network use

./network_setup.sh down
sudo docker ps

you shouldn’t see any containers running.

To stop and remove individual containers use

sudo docker stop <name>

sudo docker rm <name>

To learn about the Hyperledger Fabric API visit http://jimthematrix.github.io/ This is a great resource for learning about the intricacies of the network and the different certificates needed for trusted transactions.

Below is the second part, the Application Layer: Fabric Composer.

The Application Layer

You have configured and set up a VM, you have your Docker containers running, and your Hyperledger Fabric v1.0 blockchain infrastructure is live; now you want to model, build, and deploy a business application on this network. This article will show you exactly how to wire up the application layer and the infrastructure layer.

You can run the Playground locally using npm or Docker:

  npm install -g composer-playground
 docker run -d -p 8080:8080 hyperledger/composer-playground

Once you have configured your business network you can export it as a BNA (business network archive) file.

The Business Network

Permissioned blockchain applications built for enterprise and commercial use need an abstraction layer. This is provided by Fabric Composer, a toolset and application framework that enables you to quickly model and deploy applications to Fabric infrastructure. It is a framework that lets you reason about the Participants, the Assets, and the Transaction logic that drive state changes on your distributed ledger. This business logic and these processes are what drive the distributed state changes to the peers on your network.

Hyperledger Fabric Composer

Fabric Composer is an open-source project and part of the Hyperledger Foundation.

First, ssh into the same VM you set up.

ssh -i ./yourkeyname.pem ubuntu@yoururlname

Installing Fabric Composer Dev Tools

You should have the majority of these tools installed in your configured VM, but this script will make sure everything is the correct version and you haven’t missed anything.

curl -O https://hyperledger.github.io/composer/prereqs-ubuntu.sh

chmod u+x prereqs-ubuntu.sh
./prereqs-ubuntu.sh
  1. To install composer-cli run the following command:
    npm install -g composer-cli
    

    The composer-cli contains all the command line operations for developing business networks.

  2. To install generator-hyperledger-composer run the following command:
    npm install -g generator-hyperledger-composer
    

    The generator-hyperledger-composer is a Yeoman plugin that creates bespoke applications for your business network.

  3. To install composer-rest-server run the following command:
    npm install -g composer-rest-server
    

    The composer-rest-server uses the Hyperledger Composer LoopBack Connector to connect to a business network, extract the models and then present a page containing the REST APIs that have been generated for the model.

  4. To install Yeoman run the following command:
    npm install -g yo
    

    Yeoman is a tool for generating applications. When combined with the generator-hyperledger-composer component, it can interpret business networks and generate applications based on them.

If you use VSCode, install the Hyperledger Composer VSCode plugin from the VSCode marketplace. There is also a plugin for Atom as well.

Make sure you have the v1 images; if you have any old ones, remove them:

sudo docker rmi <image> --force
sudo docker container prune

Fabric Tools 

This is an important library that will allow you to quickly spin up and tear down Fabric infrastructure.

mkdir ~/fabric-tools && cd ~/fabric-tools

curl -O https://raw.githubusercontent.com/hyperledger/composer-tools/master/packages/fabric-dev-servers/fabric-dev-servers.zip
sudo apt install unzip
unzip fabric-dev-servers.zip
export FABRIC_VERSION=hlfv1

To download Hyperledger-Fabric use the script below:

cd ~/fabric-tools
./downloadFabric.sh
sudo docker images
hyperledger/fabric-ca x86_64-1.0.1 5f30bda5f7ee 2 weeks ago 238MB
hyperledger/fabric-couchdb x86_64-1.0.1 dd645e1e92c7 2 weeks ago 1.48GB
hyperledger/fabric-orderer x86_64-1.0.1 bbf2708c9487 2 weeks ago 179MB
hyperledger/fabric-peer x86_64-1.0.1 abb05def5cfb 2 weeks ago 182MB
hyperledger/fabric-ccenv x86_64-1.0.1 7e2019cf8174 2 weeks ago 1.29GB

You can start Hyperledger Fabric using this script:

./startFabric.sh

if you run:

sudo docker ps

You should see your network is up and running

Create and Connect your Hyperledger Composer Profile

Hyperledger Fabric is distinguished as a platform for permissioned networks, where all participants have known identities.

UPDATE: ID Cards

The below line should be a huge ah-hah moment.

*** An ID Card contains an Identity for a single Participant within a deployed business network. ***

You have defined a Participant in your business network (modeled, but also actually created a record of one), given the participant an ID (id, name, email), and now, the above: the ID Card contains an Identity for that single Participant within the deployed business network.

It is like your auth method. You create the participant on the network and then you can go and create an ID that lets you sign as that participant. Then create another participant, maybe the same type, maybe a different type, and create another ID for that one.
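The flow at the command line looks roughly like this (the network and participant names here are placeholders from the sample network):

composer identity issue -c admin@my-network -f jdoe.card -u jdoe -a "resource:org.acme.sample.SampleParticipant#P1"
composer card import -f jdoe.card
composer network ping -c jdoe@my-network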

*** Command Line – Create Profile ***

Issue from the fabric-tools directory ./createComposerProfile.sh

This script will create a PeerAdmin profile for you.

Connection profiles are a bit confusing and can be frustrating to set up, but this is an integral part of being able to build and deploy your business network.

There are two folders on your machine:

cd ~/.composer-credentials

and

cd ~/.composer-connection-profiles

When you run Composer locally using Docker containers the Profiles you create will be stored there.

You can find your PeerAdmin profile’s pub and priv keys in the composer-credentials directory.

You then have to import your Composer profile using this command:

composer identity import -p hlfv1 -u PeerAdmin -k /c/Users/domst/.composer-credentials/114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457-priv -c /c/Users/domst/.composer-credentials/114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457-pub

OR

composer identity import -p hlfv1 -u PeerAdmin -c ${HOME}/.composer-credentials/114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457-pub -k ${HOME}/.composer-credentials/114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457-priv

You should get a Command Successful Message.

You may run into a permissions problem if your setup is still sudoed out, so you will need to check using

ls -la /home/ubuntu/.composer-credentials

Then if it is, run

sudo chown -R ubuntu /home/ubuntu/.composer-credentials

Network Teardown

Also, here are some more scripts to stop and tear down the infrastructure layer.

./stopFabric.sh

At this point you should be ready to get dialed in, model out, and wire up a business network. Grab some coffee, a nice glass of water, a new piece of gum; you’re just about to get going. In this next part we are going to connect a sample business network to your Hyperledger Fabric v1 blockchain.

To start over do this

./teardownFabric.sh

Or continue On… to connect a Sample Business Network to your Hyperledger Fabric V1 Blockchain.

Building Your Business Network

This section isn’t mandatory, but if you want to use the Playground editor, this is some background on how to access it in the browser. Or you can skip this and use vim.

Make sure Fabric Composer is running and your security groups have inbound and outbound traffic open, then go to your Amazon Web Services URL with the /editor extension for the Composer editor:

http://yourURL.compute.amazonaws.com:8080/editor

Model your Business Network using the Hyperledger Composer Playground. A Hyperledger Composer Business Network Definition is composed of a set of model files and a set of scripts.  This can be run in the browser or locally using a docker container.

Modeling a business network consists of:

  • Participants – Members of the Business Network
  • Assets – The resources of value tracked on the network
  • Transactions – State change mechanism of the Network

You also are able to use:

  • Concepts
  • enum
  • abstract

Lastly, the Business Network can define:

  • Events – Defined indicators that can be subscribed to by external systems
  • Access Control – Permissions for state changes

You can use JavaScript to define the transaction logic that you would like to use in your applications. We are just going to use a sample business network; this can be edited and redeployed to update the blockchain.
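For a sense of what that logic looks like, this is the kind of transaction processor function the basic sample network ships with (the names follow the org.acme.sample model):

/**
 * Change the value of an asset and persist the update.
 * @param {org.acme.sample.SampleTransaction} tx The transaction instance.
 * @transaction
 */
function sampleTransaction(tx) {
  // Update the asset with the new value carried by the transaction
  tx.asset.value = tx.newValue;
  // Write the change back to the asset registry on the ledger
  return getAssetRegistry('org.acme.sample.SampleAsset')
    .then(function (registry) {
      return registry.update(tx.asset);
    });
}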

Clone a sample network using this repo:

git clone https://github.com/hyperledger/composer-sample-networks.git
cd packages

cd basic-sample-network

npm install

You should get something like this:

Creating Business Network Archive

Looking for package.json of Business Network Definition
Input directory: /home/ubuntu/my-network

Found:
Description: The Hello World of Hyperledger Composer samples
Name: my-network
Identifier: my-network@0.1.8

Written Business Network Definition Archive file to
Output file: ./dist/my-network.bna

Command succeeded

Deploying Your Business Network To Your Blockchain Infrastructure

Once you have your sample business network you are going to want to create a BNA file. A BNA, the Business Network Archive, is a file that describes your network configuration and application and can be deployed to the infrastructure you have set up. To deploy your network use:

composer network deploy -a my-network.bna -p hlfv1 -i PeerAdmin -s randomString

Setting up our REST Server

Hyperledger Fabric v1.0 provides a basic API using Protocol Buffers over gRPC for applications to interact with the blockchain network. Composer enables you to create a REST server and communicate with the blockchain network using JSON. This part is important, and you should have an understanding of it before starting to model out your business network/application. Composer generates the REST server dynamically based on the business network participants and assets you design. You can configure your network and launch a REST API that can be called from other applications.

composer-rest-server
Once your business network is deployed to your infrastructure, you should be able to verify in the top right that your connection profile is connected. You are now ready to send transactions to the blockchain directly from the Composer interface or by calling your REST server from a client.
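Calling the generated REST API from a client is plain JSON over HTTP. A sketch against the basic sample network’s transaction endpoint on the default port 3000 (the $class and field names come from the sample model):

var http = require('http');

var body = JSON.stringify({
  $class: 'org.acme.sample.SampleTransaction',
  asset: 'resource:org.acme.sample.SampleAsset#ASSET_1',
  newValue: '42'
});

var req = http.request({
  host: 'localhost',
  port: 3000,
  path: '/api/SampleTransaction',
  method: 'POST',
  headers: { 'Content-Type': 'application/json', 'Content-Length': Buffer.byteLength(body) }
}, function (res) {
  res.on('data', function (chunk) { process.stdout.write(chunk); });
});

req.end(body);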

Updated: Wiring It All Up

I am introducing this third section as a lot of things have been updated and added to building Hyperledger blockchain apps.

I want to make this section succinct but also talk about what feels like the ideal setup for building these types of applications. I have been building on this framework for roughly a year and have come across a lot of the gotchas; I think this is the section that will hopefully iron out getting these business networks into working systems that are fluid, easy to update and maintain, scalable, and valuable.

1) Set up your terminal correctly using Byobu

This will save you a lot of time when you are splitting screens and also persisting sessions for the composer-rest-server, node, and the Angular app.

Install Byobu 

Split horizontal with Shift F2

Split vertical with Ctrl F2

Move between splits with Shift F3

Zoom into a window with Shift F11

This will save you a lot of time and make it much easier to navigate the different components of the business network.

2) Install Node-RED and the Hyperledger Composer Template / Plugins for Node-RED

You can use these for a number of different integrations, but combined with the pub-sub event system in Composer you can start to build some cool things.
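On the Composer side, the events come off the composer-client connection. A minimal sketch of subscribing from Node (the card name admin@my-network is an assumption for illustration):

const { BusinessNetworkConnection } = require('composer-client');

const connection = new BusinessNetworkConnection();

// Log every event emitted by transaction processor functions.
connection.on('event', (event) => {
  console.log('Received event:', event.getFullyQualifiedType());
});

connection.connect('admin@my-network')
  .then(() => console.log('Listening for business network events...'))
  .catch((err) => console.error(err));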

3) Configure your Angular App

There are a few things you will need to edit here in order for the Angular App to work in the VM.

Make sure you have Yeoman (yo) installed, then generate the app:

npm install -g generator-hyperledger-composer

yo hyperledger-composer

First, navigate to the app's configuration file:

cd <appname>/src/app

vi configuration.ts

Change the public ApiIP: string = value to the URL of your VM, then save and exit with Esc : x

Next, edit the Angular CLI serve task so the dev server accepts requests from outside the VM:

cd <appname>/node_modules/@angular/cli/tasks

vi serve.js

Add this line where the webpackDevServerConfiguration object is defined:

webpackDevServerConfiguration.disableHostCheck = true;

Lastly, change the start script in package.json:

cd <appname>

vi package.json

"start": "ng serve --host=0.0.0.0 --port 4200",


Now make sure your composer-rest-server is running, then start the Angular app and it will be available at your VM URL on port 4200:

cd <appname>

npm start

4) Manually Creating and Managing Cards

Cards are what allow you to authenticate and make state changes as a specific identity on the business network.

In a sense, managing the cards (for development purposes) allows you to identify who it is that made the update to the business network.

There are a few commands you will want to use.

composer card list

composer card delete -n <name>

composer card import -f <card file>

composer card create

The way I have it set up: I created an admin directory and imported the certs there to make the card.

First you have to create a connection.json file:

{
  "name": "fabric-network",
  "type": "hlfv1",
  "mspID": "Org1MSP",
  "peers": [
    {
      "requestURL": "grpc://localhost:7051",
      "eventURL": "grpc://localhost:7053"
    }
  ],
  "ca": {
    "url": "http://localhost:7054",
    "name": "ca.org1.example.com"
  },
  "orderers": [
    {
      "url": "grpc://localhost:7050"
    }
  ],
  "channel": "composerchannel",
  "timeout": 300
}

I then create the folders with the certs and keys in them and bring them into the same directory. You will need to navigate to where the certs are and copy them into your folder, or you can just use the sample ones from this tutorial below.

vi 114a

-----BEGIN PRIVATE KEY-----
MIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQg00IwLLBKoi/9ikb6
ZOAV0S1XeNGWllvlFDeczRKQn2uhRANCAARrvCsQUNRpMUkzFaC7+zV4mClo+beg
4VkUyQR5y5Fle5UVH2GigChWnUoouTO2e2acA/DUuyLDHT0emeBMhoMC
-----END PRIVATE KEY-----

vi Admin@org1.example.com-cert.pem

-----BEGIN CERTIFICATE-----
MIICGjCCAcCgAwIBAgIRANuOnVN+yd/BGyoX7ioEklQwCgYIKoZIzj0EAwIwczEL
MAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBG
cmFuY2lzY28xGTAXBgNVBAoTEG9yZzEuZXhhbXBsZS5jb20xHDAaBgNVBAMTE2Nh
Lm9yZzEuZXhhbXBsZS5jb20wHhcNMTcwNjI2MTI0OTI2WhcNMjcwNjI0MTI0OTI2
WjBbMQswCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMN
U2FuIEZyYW5jaXNjbzEfMB0GA1UEAwwWQWRtaW5Ab3JnMS5leGFtcGxlLmNvbTBZ
MBMGByqGSM49AgEGCCqGSM49AwEHA0IABGu8KxBQ1GkxSTMVoLv7NXiYKWj5t6Dh
WRTJBHnLkWV7lRUfYaKAKFadSii5M7Z7ZpwD8NS7IsMdPR6Z4EyGgwKjTTBLMA4G
A1UdDwEB/wQEAwIHgDAMBgNVHRMBAf8EAjAAMCsGA1UdIwQkMCKAIBmrZau7BIB9
rRLkwKmqpmSecIaOOr0CF6Mi2J5H4aauMAoGCCqGSM49BAMCA0gAMEUCIQC4sKQ6
CEgqbTYe48az95W9/hnZ+7DI5eSnWUwV9vCd/gIgS5K6omNJydoFoEpaEIwM97uS
XVMHPa0iyC497vdNURA=
-----END CERTIFICATE-----


Lastly, run:

composer card create -p connection.json -u PeerAdmin -c Admin@org1.example.com-cert.pem -k 114a -r PeerAdmin -r ChannelAdmin

Which will create

PeerAdmin@fabric-network.card

5) Install, Start, and Deploy

cd my-network

cd dist

composer runtime install -c PeerAdmin@fabric-network -n <network name>

composer network start -c PeerAdmin@fabric-network -a <name>.bna -A admin -S adminpw

composer card import -f admin@<name>.card

Run composer card list and you will see the card you created linked to the deployed business network. Lastly, run composer-rest-server and use that card.

6) Linking up Composer Playground

This will probably save you the most time when developing, updating and deploying the business network to the Fabric instance.

npm install -g composer-playground

Start up Composer Playground and connect using the card you created in step 4.

If the container for your business network is deployed, you should be able to make changes in the Composer Playground browser, and when you update, it will update the business network deployed to Fabric in real time.

A lot better than copying and editing the lib or model files and recompiling a BNA file at the command line.

Also, once you start creating and issuing cards from Composer Playground, you will see them available using:

composer card list

7) Blockchain Explorer

Blockchain Explorer and Composer Playground both run on 8080, so you will need to choose one or change the config.

Make sure to export your BNA file, because sometimes when Composer updates, the browser cache needs to be cleared and you will lose your business network build.

I will keep working with Blockchain Explorer, but note that it is an explorer for Fabric, not Composer; i.e. it will not show you Participants and Assets, but it will show infrastructure-level transactions, peers, chaincode, etc.

If you have any questions or are interested in building more blockchain applications using Fabric and Composer reach out to me at dominic@dapps.ai

-Dom

Versions:

https://gateway.ipfs.io/ipfs/QmXMaNJRGstyib8iAZCZ5HxShorCxCDFDzeG2Hx73hTYcp

5.26 – using v1.0

https://gateway.ipfs.io/ipfs/QmWtaoMGA8SVc7kdjUnRP5hS1KsX51TEYH57VNTPJWQPFh

5.27 – using v1.0 alpha2 docker images and added in fabric composer (current)

https://gateway.ipfs.io/ipfs/Qme6FT3HQzUsY8z69P6KCutaUogmc4BBxCSPJDt3czaM6M

 

IPLD Resolvers

This was in my drafts from a few months back, right when I got really excited about IPFS, content addressing data, and the potential future applications that will be built on the protocol.

Content addressing is a true computational advancement in the way that we think about adding and retrieving content on the web. We can take existing databases and use the various parts of the IPFS protocol to build clusters of nodes that serve content in the form of IPLD structures.

IPLD enables futureproof, immutable, secure graphs of data that are essentially giant Merkle trees. The advantages of converting a relational database or key-value DB into a merkle tree are endless.

The data and named links give the collection of IPFS objects the structure of a Merkle DAG — DAG meaning Directed Acyclic Graph, and Merkle to signify that this is a cryptographically authenticated data structure that uses cryptographic hashes to address content.

This protocol enables an entirely new way to search and retrieve data. It is no longer about where a file is located on a website. It is now about what exact piece of data you are looking for. If I send you an email with a link in it and 30 days later you click the link, how can you be certain that the data you are looking at is the same as what I originally sent you? You can’t.

With IPLD you can use content addressing to know for certain that a piece of content has not changed. You can traverse the IPLD object and seamlessly pick out pieces of the data. And once that data is locally cached, you can use the application offline.

You can also work on an application with other peers around you, using a shared state. But why is this important for businesses?

Content addressing for companies will ensure a number of open standards. We can now run fully encrypted private networks whose content is addressed by hash.

This is a future proof system. Hashes that are currently used in databases can be broken, but now we have multi-hashing protocols.

We can build blockchains that use IPLD and libp2p.

The IPLD resolver uses two main pieces: bitswap and the blocks-service.

Bitswap transfers blocks, and the blocks-service determines what needs to be fetched based on what is already in the local cache. This prevents duplication and increases efficiency in the system.

We will be creating a resolver for the enterprise that enables companies to take their existing database tables and convert them into giant merkle trees that are interoperable. IPLD is like a global adapter for cryptographic protocols.

Creating the Enterprise Forest

Here is where it gets interesting. The enterprise uses a few different types of databases: relational SQL databases and noSQL distributed databases.

Salesforce is another type of database that we can take and convert into a merkle tree.

S3 is another type of data store.

I call this tables to trees.

We are going to take systems that are on-prem, siloed, or centralized with a cloud provider and turn them into merkle trees using IPLD content addressing.

The IPLD resolver is an internal DAG API module.

We can create a plug and play system of resolvers. This is where a company can take their existing relational database and keep it.

We will resolve the database and run a blockchain in parallel. This blockchain will be built using two protocols from the IPFS project: IPLD and libp2p.

The IPLD Resolver for the enterprise will consist of the following methods (see the sketch after this list):

.put

.get

.remove

.support.add

.support.rm
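As a minimal sketch of what .put and .get look like in practice, here is the dag API exposed through the ipfs-api client against a local daemon; the table row shape is purely illustrative:

const ipfsAPI = require('ipfs-api')
const ipfs = ipfsAPI('localhost', '5001')

// .put: content address one row of a (hypothetical) accounts table as dag-cbor
const row = { id: 42, name: 'Acme', region: 'EMEA' }

ipfs.dag.put(row, { format: 'dag-cbor', hashAlg: 'sha2-256' }, (err, cid) => {
  if (err) throw err
  console.log('row stored as', cid.toBaseEncodedString())

  // .get: resolve a single field back out of the merkle tree by path
  ipfs.dag.get(cid, 'name', (err2, result) => {
    if (err2) throw err2
    console.log(result.value) // 'Acme'
  })
})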

We will take any enterprise database and build out the content addressed merkle tree on top of it.

Content Addressing enterprise content on IPFS-Clusters

An enterprise network can consist of 10,000, 50,000, or 100,000 nodes, and each IPLD object has to be under 1 MB.

All of those nodes will be hosting the merkle tree mirror of the relational database.

This can also enable offline operation for the network. Essentially they have their own protocol that is mirroring their on-premise system.

We will be starting with dag-mySQL, dag-noSQL, and dag-apex.

The MySQL hash function that exists on the on-premise system stays in place when this is implemented. If that hash is ever broken, there is no way to upgrade the system without completely migrating it.

Content addressing makes data migration a lot easier, or not even necessary, in the future. Once the data is content addressed and the merkle tree is created, we can then start traversing the data.

We will also build interfaces that can interact with the IPLD data.

IPLD is a format that can enable version control. The resolver will essentially take any database, any enterprise implementation, and convert it into a merkle tree.

We are essentially planting the seeds (product) and watering them (services). Once these trees are all in place, they can communicate because they are all using the same data format.

Future Proofing your Business

We are creating distributed, authenticated, hash-linked data structures. IPLD is a common hash-chain format for distributed data structures.

Each node in these merkle trees will have a unique Content Identifier (CID) – a format for these hash-links.

This is a database agnostic path notation: any hash, any data format.

Each CID carries a multihash (multiple cryptographic hashes), a multicodec (multiple serialization formats), and a multibase (multiple base encodings).
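As a sketch of how those prefixes stack up inside a CIDv1 (the byte values below are the registered codes for base58btc, dag-cbor, and sha2-256):

z        – multibase prefix on the string form (base58btc)
01       – CID version 1
71       – multicodec (dag-cbor)
12 20 …  – multihash (sha2-256 code and digest length, followed by the 32-byte digest)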

Again, why is this important for businesses? The most important reasons are transparency and security: this is a tamper proof, tamper evident database that can be shared, traversed, replicated, distributed, cached, and encrypted, and you now know exactly WHAT you are linking to, not where. You know which hash function to verify with. You know what base it is in.

This new protocol enables cryptographic systems to interoperate over a p2p network that serves hash linked data.

IPFS 0.4.5 includes the dag command that can be used to traverse IPLD objects.

Now to write the dag-sql resolver.

Take any existing relational database and you can now traverse that database through content addressing.

Content address your database to a cluster of IPFS Cluster nodes on a private encrypted network.

The deterministic head of the cluster then writes new entries.

We use the Ethereum network to assign a key pair for your users to leverage the mobile interface. You can sign in via fingerprint or by facial recognition using the Microsoft Cognitive Toolkit. Your database will run in parallel: you will keep your on-premise system and have a content addressed mirror of it. Content addressing a filesystem or any type of database creates a Merkle DAG. With this Merkle DAG we can format your data in a way that is secure, immutable, tamper proof, futureproof, and able to communicate with other underlying network protocols and application stacks. We can effectively create a blockchain network out of your existing database that runs securely on a cluster of p2p nodes. I am planting business merkle dag seeds in the merkle forest. Patches of this forest will be able to communicate with other protocols via any hash and in any format.

This is the way that the internet will work going into the future. A purely decentralized web of business trees.

 

Continued:

On Graph Data Types

DAG:

In a graph model, each vertex consists of:

  • A unique identifier
  • A set of outgoing edges
  • A set of incoming edges
  • A collection of properties (key-value pairs)

Each edge consists of:

  • A unique identifier
  • The vertex at which the edge starts (the tail vertex)
  • The vertex at which the edge ends (the head vertex)
  • A label to describe the kind of relationship between the two vertices
  • A collection of properties (key-value pairs)

Important aspects of the model:

Any vertex can have an edge connecting it with any other vertex. There is no schema that restricts which kinds of things can or cannot be associated.

Given a vertex, you can efficiently find both its incoming and its outgoing edges, and thus traverse the graph – i.e. follow a path through a chain of vertices – both forward and backward. This is why you can traverse a hashed blockchain with a resolver.

By using different labels for different kinds of relationships, you can store several different kinds of information in a single graph, while still maintaining a clean data model.
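A minimal sketch of this vertex/edge model in JavaScript (the data here is purely illustrative):

// Vertices: properties plus incoming and outgoing edge ids.
const vertices = new Map([
  ['v1', { properties: { type: 'Person', name: 'Lucy' }, in: [], out: ['e1'] }],
  ['v2', { properties: { type: 'Place', name: 'Idaho' }, in: ['e1'], out: [] }],
]);

// Edges: tail vertex, head vertex, label, properties.
const edges = new Map([
  ['e1', { tail: 'v1', head: 'v2', label: 'born_in', properties: {} }],
]);

// Traverse forward: follow a vertex's outgoing edges to its neighbors.
function neighborsOut(vertexId) {
  return vertices.get(vertexId).out.map((eid) => edges.get(eid).head);
}

console.log(neighborsOut('v1')); // ['v2']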

 

 

Web3, Solc, IPFS and IPLD

Web3: JavaScript API for interacting with the Ethereum Virtual Machine

Solidity: Smart Contract Programming Language

IPFS: Distributed File System

This is a quick introduction to calling the Ethereum Virtual Machine using the web3 API, compiling Solidity Smart Contracts, and traversing content addressed data structures on the Interplanetary File System.

These are some of the core technologies that will be used to build Ðapps.

npm install web3 --save

npm install solc --save

npm install ipfs-api --save

The Ethereum JS Util Library – "ethereumjs-util": "4.5.0"

The Ethereum JS Transaction Library – "ethereumjs-tx": "1.1.2"

The first thing we need to do is get testrpc up and running. Depending on the type of machine you have, this could be rather straightforward or it may take a while. The link below should point you in the right direction.

Configure testrpc

Sending Transactions

Once your test Ethereum Node is running, instantiate a new web3 object. This can be done by doing the following:

Start up node in the console:

var Web3 = require("web3")

var web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"))

We can now call different web3 APIs. Test out the connection to TESTRPC by calling:

web3.eth.accounts

You will see all 10 accounts generated from the TESTRPC.

web3.eth.accounts[0]

You will see the address of the first account generated by the TESTRPC.

web3.eth.getBalance(web3.eth.accounts[0])

You should be returned the balance of your first TESTRPC address.

Converting from Wei to Ether

web3.fromWei(web3.eth.getBalance(web3.eth.accounts[0]), 'ether').toNumber()

var acct1 = web3.eth.accounts[0]

var acct2 = web3.eth.accounts[1]

var balance = (acct) => { return web3.fromWei(web3.eth.getBalance(acct), 'ether').toNumber() }

balance(acct1)

balance(acct2)

Send an Ethereum Transaction

web3.eth.sendTransaction({from: acct1, to: acct2, value: web3.toWei(1, 'ether'), gas: 21000, gasPrice: 2000000000})

Send a Raw Ethereum Transaction

var EthTx = require("ethereumjs-tx")

// paste one of the TESTRPC private keys here, without the 0x prefix
var pKey1 = "<testrpc private key for acct1>"

var pKey1x = new Buffer(pKey1, 'hex')

pKey1x

var rawTx = {
  nonce: web3.toHex(web3.eth.getTransactionCount(acct1)),
  to: acct2,
  gasPrice: web3.toHex(2000000000),
  gasLimit: web3.toHex(21000),
  value: web3.toHex(web3.toWei(25, 'ether')),
  data: ''
}

var tx = new EthTx(rawTx)

tx.sign(pKey1x)

tx.serialize().toString('hex')

web3.eth.sendRawTransaction(`0x${tx.serialize().toString('hex')}`, (error, data) => {
  if (!error) { console.log(data) }
})

Creating Smart Contracts

var Web3 = require("web3")

var solc = require("solc")

var web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"))

Create a source variable with your Solidity code. (This can be any smart contract code, unless you are trying to import another contract at the top.)

var source = `contract Messenger {
  function displayMessage() constant returns (string) {
    return "This is the message in the smart contract";
  }
}`

var compiled = solc.compile(source)

compiled

You can then start to unpack the different parts of the compiled contract:

compiled.contracts.Messenger

compiled.contracts.Messenger.bytecode

compiled.contracts.Messenger.opcodes

compiled.contracts.Messenger.interface

var abi = JSON.parse(compiled.contracts.Messenger.interface)

In order to deploy the contract to the network you will need the JSON interface of the contract: the abi (application binary interface).

var messengerContract = web3.eth.contract(abi)

messengerContract

var deployed = messengerContract.new({
  from: web3.eth.accounts[0],
  data: compiled.contracts.Messenger.bytecode,
  gas: 470000,
  gasPrice: 5
}, (error, contract) => { })

You should now see in the testrpc console the transaction broadcast to the network.

web3.eth.getTransaction("0x0")

Call a Function in the Contract

deployed.displayMessage.call()

IPFS – Hashlink Technology

IPFS defines a new paradigm in the way that we can organize, traverse, and retrieve data using a p2p network that serves hash linked data. This is done by using merkle-links between files that are distributed on the interplanetary file system.


Content-addressing enables unique identifiers and unique links between data that lives on the distributed network. It creates distributed, authenticated, hash-linked data structures. This is achieved by merkle-links and merkle-paths.

A merkle-link is a link between two objects which is content-addressed with the cryptographic hash of the target object, and embedded in the source object. This is the way the Bitcoin and Ethereum blockchains work: they are both essentially giant merkle trees, one with blocks of ordered transactions, the other with computational operations driving state changes.

IPLD – Interplanetary Linked Data

IPLD is a common hash-chain format for distributed data structures. This creates a database agnostic path notation.

any hash –> any data format

This enables cryptographic integrity checking and immutable data structures. Some of the properties of this are long-term archival, versioning, and distributed mutable state.

It shifts the way you think about fetching content. Content addressing is What vs Where.

A "link" represented as a JSON object is comprised of a link key and a link value:

{ "/": "ipfs/0x0" }

The "/" key is the link key, and the hash it points to is the link value. Other properties of the link can be defined in the JSON object as well.

What if we want to create a more dynamic link object? This can be achieved by using a merkle-path.

A merkle-path is a unix-style path which initially dereferences through a merkle-link and allows access to elements of the referenced node, and of other nodes transitively.

This means that you can design an object model on top of IPLD that would be specialized for file manipulation and have specific path algorithms to query this model.

This would look like:

/ipfs/0x0/a/b/c/d

The path consists of the protocol, the hash of the linked object, and the traversal within the referenced data.

Here is an example of a traversal through the link JSON object.
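A minimal sketch with the ipfs-api dag API (assuming a local daemon on port 5001; the object shapes are illustrative):

const ipfsAPI = require('ipfs-api')
const ipfs = ipfsAPI('localhost', '5001')

const author = { name: 'Dom' }

ipfs.dag.put(author, { format: 'dag-cbor', hashAlg: 'sha2-256' }, (err, authorCid) => {
  if (err) throw err

  // the post embeds a merkle-link to the author object
  const post = { title: 'IPLD', author: { '/': authorCid.toBaseEncodedString() } }

  ipfs.dag.put(post, { format: 'dag-cbor', hashAlg: 'sha2-256' }, (err2, postCid) => {
    if (err2) throw err2

    // a merkle-path dereferences through the link transitively
    ipfs.dag.get(postCid, 'author/name', (err3, result) => {
      if (err3) throw err3
      console.log(result.value) // 'Dom'
    })
  })
})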

This is all essentially a JSON structure for traversing files: navigating through the IPLD object, walking along the IPFS hash to pull arbitrary data nested in these data structures.

CID: Content Identifier – format for hash-links

Multihash –  multiple cryptographic hashes

Multicodec  – multiple serialization formats

Multibase – multiple base encodings


This is very powerful because we can use Content Identifiers to traverse different cryptographic protocols: Bitcoin, Ethereum, ZCash, Git.

We could also link from crypto to Salesforce or a relational DB by using content addressing.

The paths between these now disparate systems can be resolved by using this uniform, immutable, distributed file system.


This is what an IPFS-Cluster provides: collective pinning and replication of content addressed data across a private set of peers. Here are the commands that can add / remove CIDs from peers in the cluster:
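These live in the ipfs-cluster-ctl command line tool; a minimal sketch of the relevant invocations (exact flags may vary by version):

ipfs-cluster-ctl pin add <cid>   # pin a CID across the cluster peers
ipfs-cluster-ctl pin rm <cid>    # unpin a CID from the cluster
ipfs-cluster-ctl pin ls          # list the CIDs tracked by the cluster
ipfs-cluster-ctl peers ls        # list the peers in the cluster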

 


This is the way that the internet will work going into the future. A purely decentralized web of business trees.

 
