Thoughts on Bitcoin

Bitcoin is.

The security of bitcoin as a protocol is derived from the quantum properties it uses as a secure informational state machine. The state transitions can be trusted based on pure mathematics.

  • Elliptic Curves
  • Digital Signatures
  • Proof-of-Work

Bitcoin as a system is the most powerful machine / computer in the world.

It has achieved this in barely ten years.

What makes it incredibly valuable with respect to the concept of money is that it is fairly easy to write to the most powerful machine in the world.

State is live.

Transactions that are submitted are picked up by 98% of the hashing power of the machine.

It is a universal singleton.

By definition it is information creation in the purest form.

It is quantum in that a bitcoin address exists in a digital layer whose function is created on a spent output and used on an input.

That act of transferring a bitcoin is that which creates the value.

No digital file is transferred.

Bitcoin is simply a namespace for the act of quantum transactional state verification.

It is a wave until that act of spending makes it material. It is intangible yet veridical.

Consensus is secured by gravity and energy.

There is a distribution of energy that is secured by the ability to call and announce state change verification: its inputs and outputs on the network.

It is a graph database d = 1.32

Hashlink:

ipfs/QmZHh2KKXTFtdmk5ybV2hmjXZvEpmyXEvj11DnNoguukmF

Interoperable, Scalable Blockchain Protocols

Last week I attended the Wanxiang Global Blockchain Summit in Shanghai. There were a number of sessions but one that really stuck with me was about potential scaling solutions for distributed consensus protocols.

There is a concept of polyglot persistence, which essentially means that certain databases are suited to certain purposes. You have SQL databases, noSQL databases, graph databases, blockchains, distributed fault-tolerant consensus ledgers, CRDTs; all of these different types of protocols. They have different consensus mechanisms, data models, query languages, and state-change and message-ordering models such as validators or orderers; but in order to scale effectively they need to interoperate and complement each other.

They will need to be able to share account state. They will need to resolve any concurrency issues that arise. They will need to handle replication and be eventually consistent.

Most of my time is spent working at the application layer, but an assumption I hold is that building solutions that incorporate multiple protocols will be important in the future.

This type of approach is not often found in the blockchain space, largely because of the economic and personal incentives one has when heavily invested in a specific protocol. This could be time spent learning Solidity versus Kotlin; this could be owning a bunch of Ether versus NEO; this could be working at a company that is heavily contributing to a particular open project.

Regardless of one's preference in blockchains, distributed ledgers, or consensus databases, this past trip to Shanghai validated that taking a multi-blockchain approach, and being open to learning the differences between protocols and how they can complement each other, is going to have a great effect on the way future solutions are built.

An example of this is the new Ethermint Token.

Ethereum and Tendermint

In what is being called the Shanghai Accord, a new partnership between the two projects was announced in which they will work together to help scale blockchain technology. Tendermint, a Proof-of-Stake consensus protocol, and its new "Internet of Blockchains" hub, project Cosmos, are working closely with the Ethereum project to create derivative tokens on Tendermint infrastructure in what is being called a hard spoon. Essentially, a new coin is minted on one chain (Tendermint) that uses the account state, or balances, of an existing token on another chain (Ethereum); the result is Ethermint. The goal is a scalable cross-blockchain solution in which tokens have shared state and account balances on both the Ethereum chain and the Tendermint chain.
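To make the hard spoon idea concrete, here is a minimal sketch (using the web3 0.x API from later in this post) of reading account balances from an Ethereum node at a fixed point and using them to seed the genesis state of a new chain. The snapshot block and the genesis format are illustrative assumptions, not Ethermint's actual tooling.

var Web3 = require("web3")
var web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"))

// Illustrative: the point at which the "spoon" takes its snapshot.
var snapshotBlock = "latest" // in practice, a fixed, agreed-upon block number

// Copy each account's balance from the source chain; the new chain would
// mint an equivalent amount of the derivative token for each account.
var genesisAccounts = web3.eth.accounts.map(function (addr) {
  return {
    address: addr,
    balance: web3.eth.getBalance(addr, snapshotBlock).toString()
  }
})

console.log(JSON.stringify({ accounts: genesisAccounts }, null, 2))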

Content-Addressed Blockchain Resolvers

This is but one approach to interoperable blockchains. Other approaches have been proposed historically, such as pegged sidechains, but one that seems very interesting to me is leveraging IPFS at the network layer to content-address and resolve across the different blockchains and databases (as long as there is a key-value store) using multihash and the IPLD thin-waist protocol. It is still very much a concept, but again it is aligned with the idea of having multiple chains interoperating to address some of the issues that blockchains are facing today.

Plasma

Another approach is using Plasma; this allows transactions to occur off-chain, only broadcasting them to the chain if there is a transaction dispute (you never sent me the money, or vice versa). In this way the blockchain becomes an arbiter, a distributed courthouse: a way to have veridical computing if there is in fact a need for chain-based settlement.
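As a rough sketch of that dispute-driven pattern (not the Plasma spec itself), the parties below exchange signed balance updates off-chain with an increasing nonce, and only the latest signed state would ever be submitted to the chain for arbitration. The key derivation and state shape are illustrative assumptions.

var util = require("ethereumjs-util")

// Illustrative only: derive a demo private key; real keys come from a wallet.
var alicePrivKey = util.sha3("alice demo key")

function signState(state, privateKey) {
  var hash = util.sha3(JSON.stringify(state))
  var sig = util.ecsign(hash, privateKey)
  return { state: state, sig: sig }
}

// Off-chain: each transfer bumps the nonce and rebalances the channel.
var update = signState({ nonce: 7, alice: 40, bob: 60 }, alicePrivKey)

// On dispute: the contract would verify the signature and accept the
// signed state with the highest nonce as the settlement.
console.log(update)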

Proof-of-Stake

Another takeaway and interesting concept is the movement to proof-of-stake. Ethereum's movement toward PoS is embodied in Vlad Zamfir's Casper protocol. Other chains such as Lisk use a dPoS system, and Tendermint is PoS as well. A common theme between them is the concept of weight. Stake, and the weight of stake, has some very interesting implications for the security and threat models that could be introduced to a protocol. More on Proof-of-Stake/Casper in another post.
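A toy sketch of what stake weight means in practice: a validator's chance of proposing the next block is proportional to its bonded stake. This is illustrative only; Casper and Tendermint select and punish validators in far more sophisticated ways.

var validators = [
  { id: "A", stake: 50 },
  { id: "B", stake: 30 },
  { id: "C", stake: 20 }
]

// Pick a proposer with probability proportional to stake ("weight").
function pickProposer(validators) {
  var total = validators.reduce(function (sum, v) { return sum + v.stake }, 0)
  var r = Math.random() * total
  for (var i = 0; i < validators.length; i++) {
    r -= validators[i].stake
    if (r <= 0) return validators[i].id
  }
  return validators[validators.length - 1].id
}

console.log(pickProposer(validators)) // "A" about half the time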

There have been a few cross-chain atomic swaps recently as well.

Litecoin + Decred

https://blog.decred.org/2017/09/20/On-Chain-Atomic-Swaps/

And cross-chain verification of transaction components.

ZKSnarks + Ethereum

The Byzantium hard fork went live on Ethereum's Ropsten testnet at block 1,700,000; one of its features was support for zk-SNARKs. On the Ropsten testnet, a zk-SNARK component of a Zcash transaction was verified.

Public Chains and Financial Services

Lastly, from Nick Szabo's talk on smart contracts, the idea of a smart contract acting as a digital, distributed vending machine was very interesting. There are certain security guarantees, and the rules are deterministic and known (you put money into the machine, something comes out). I have heard the vending machine analogy from Chain.com as well; the concepts share the same vision of taking the best of public blockchain technology and the best of existing financial services.

 


 

Salesforce DX Platform Development

Salesforce DX is a new integrated experience designed for high-performance, end-to-end agile Salesforce development that is both open and flexible.

Before Salesforce DX, all custom objects and object translations were stored in one large Metadata API source file.

Salesforce DX solves this problem by providing a new source shape that breaks down these large source files to make them more digestible and easier to manage with a version control system.


A Salesforce DX project stores custom objects and custom object translations in intuitive subdirectories. This source structure makes it much easier to find what you want to change or update. Once you have developed and tested your code, you can convert the format of your source so that it's readable by the Metadata API, package it, and deploy it.


This is a complete shift from monolithic, org-based development to modular, artifact-based development, while also enabling continuous integration (CI) and continuous delivery (CD). This means development teams can develop separately and build toward a release of the artifact, not a release of updates to the org.

This will be a working post as I continue to go through the release notes and work with the new platform tools.

Resources

Get Started with Salesforce DX

Salesforce DX Developer Guide

Salesforce CLI Command Reference

Install the Tools:

DX CLI – install here

Force IDE 2 – install using this link and install the prerequisite Java Development Kit here

Dev Hub (Trial Org) – sign up for a dev hub trial

Create your directory

Open up Git Bash

cd c:
mkdir dx
cd dx
mkdir projectname
cd projectname

 

Configuring your DX project

Salesforce DX projects use a particular directory structure for custom objects, custom object translations, Lightning components, and documents. For Salesforce DX projects, all source files have a companion file with the “-meta.xml” extension.

When in the project directory:

sfdx force:project:create -n projectname

ls

you will see a few files.

The project configuration file sfdx-project.json indicates that the directory is a Salesforce DX project.

If the org you are authorizing is on a My Domain subdomain, update your project configuration file (sfdx-project.json)

"sfdcLoginUrl" : "https://somethingcool.my.salesforce.com"

Package directories indicate which directories to target when syncing source to and from the scratch org. Important: Register the namespace with Salesforce and then connect the org with the registered namespace to the Dev Hub org.
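For reference, a minimal sfdx-project.json might look something like the following; the path, namespace, and API version here are illustrative values, not required ones.

{
  "packageDirectories": [
    { "path": "force-app", "default": true }
  ],
  "namespace": "",
  "sfdcLoginUrl": "https://login.salesforce.com",
  "sourceApiVersion": "41.0"
}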

Creating your VCS Repo

You can now initialize a git repo and connect it to Bitbucket or GitHub.

create bitbucket / github repo

git init
git add .
git commit -am 'first dx commit'
git push -u origin master

if you go to your repo in the browser you should see the dx json config file and sfdx structure.

cd projectname

Creating a Scratch Org

sfdx force:org:create -s -f config/project-scratch-def.json -a ipfsDXScratch

You don’t need a password but can add one using force:user:password:generate

You can also target a specific Dev Hub that isn't the default.

sfdx force:org:create --targetdevhubusername jdoe@mydevhub.com --definitionfile my-org-def.json --setalias yet-another-scratch-org

--targetdevhubusername sets a Dev Hub org other than the default

The Scratch Org

You can have

  • 50 scratch orgs per day per Dev Hub
  • 25 active scratch orgs
  • They are deleted after 7 days

The scratch org is a source-driven and disposable deployment of Salesforce code and metadata. A scratch org is fully configurable, allowing developers to emulate different Salesforce editions with different features and preferences. And you can share the scratch org configuration file with other team members, so you all have the same basic org in which to do your development.

The scratch org definition file (project-scratch-def.json) is what makes the project and scratch org configurable. You can define what type of org, permissions, and settings you want to build and test in. It is a way to build and test with certain platform features, application settings, and circumstances while also being able to version and keep source control.

This also enables continuous integration, because you can test against a use case in one of the scratch orgs and see the impact based on the configuration or by importing data from the source org. Once bulkified and tested, it can be pushed back into the source org.

Scratch orgs drive developer productivity and collaboration during the development process, and facilitate automated testing and continuous integration.

Use as many scratch org config files as you need, and name them descriptively. They are the blueprint for the org shape.

Ethereum-scratch-def.json

Production-scratch-def.json

devEdition-scratch-def.json

{
  "orgName": "Acme",
  "country": "US",
  "edition": "Enterprise",
  "features": "MultiCurrency;AuthorApex",
  "orgPreferences": {
    "enabled": ["S1DesktopEnabled", "ChatterEnabled"],
    "disabled": ["SelfSetPasswordInApi"]
  }
}

Here is a complete list for the Features and Preferences that you can configure.

Authorizing your DX Project

sfdx force:auth:web:login --setalias my-sandbox

This will open the browser and you can login

sfdx force:org:open  --path one/one.app

For JWT and CI

sfdx force:auth:jwt:grant --clientid 04580y4051234051 --jwtkeyfile /Users/jdoe/JWT/server.key --username jdoe@acdxgs0hub.org --setdefaultdevhubusername --setalias my-hub-org --instanceUrl https://test.salesforce.com

Creating

Create a Lightning Component / Apex Controller / Lightning Event at command line:

sfdx force:lightning:component:create -n TokenBalance -d force-app/main/default/aura

 

Testing

sfdx force:apex:test:run --classnames TestA,TestB --resultformat tap --codecoverage

Packaging

First convert from Salesforce DX format back to Metadata API format

sfdx force:source:convert --outputdir mdapi_output_dir --packagename managed_pkg_name

Deploy to the packaging org

sfdx force:mdapi:deploy --deploydir mdapi_output_dir --targetusername me@example.com

Creating a Beta version

  1. Ensure that you've authorized the packaging org: sfdx force:auth:web:login --targetusername me@example.com
  2. Create the beta version of the package: sfdx force:package1:version:create --packageid package_id --name package_version_name

Managed-Released Version

Later, when you're ready to create the Managed – Released version of your package, include the -m (--managedreleased) parameter:

sfdx force:package1:version:create --packageid 033xx00000007oi --name "Spring 17" --description "Spring 17 Release" --version 3.2 --managedreleased

After the managed package version is created, you can retrieve the new package version ID using force:package1:version:list

Installing a package

sfdx force:package:install --packageid 04txx000000069zAAA --targetusername <username> --installationkey <key>

sfdx force:package1:version:list

(use a bitcoin/ethereum/token address as the package installation key)

Check to see if the address has been paid and, if it has, the package installs.

Push and Pull

sfdx force:source:push

This is used to push source from your project to a scratch org.

It will push the source code to the scratch org you have set as the default, or you can specify an org other than the default by using --targetusername or its shorthand -u.

.forceignore – Any source file or directory that begins with a "dot," such as .DS_Store or .sfdx, is excluded by default.
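You can also add your own exclusions. A small illustrative .forceignore might look like this (the entries are examples, not defaults):

# exclude OS and editor artifacts from push/pull
**/.DS_Store
**/*.log
# exclude a local-only config file (illustrative)
myLocalSettings.json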

sfdx force:source:pull

This is used to pull changes from your scratch org into your project.

Setting Aliases

sfdx force:alias:set org1=username1 org2=username2

To remove an alias, set it to nothing.

 sfdx force:alias:set my-org=

Listing your Orgs

sfdx force:org:list

(D) points to the default Dev hub username

(U) points to the default scratch org username

Retrieving and Converting Source from a Managed Package

Working with Metadata has usually required a tool like ANT. Now you can retrieve unmanaged and managed packages into your Salesforce DX project. This is ideal if you have a managed package in a packaging org. You can retrieve that package, unzip it to your local project, and then convert it to Salesforce DX format, all from the CLI.

Essentially you can take your existing package and reshape it into the Salesforce DX project format.

A custom object is broken out into subdirectories:

  • businessProcesses
  • compactLayouts
  • fields
  • fieldSets
  • listViews
  • recordTypes
  • sharingReasons
  • validationRules
  • webLinks

 

mkdir mdapipkg

sfdx force:mdapi:retrieve -s -r ./mdapipkg -u username -p packagename

sfdx force:mdapi:retrieve -u username  -i jobid

Convert the metadata API source to Salesforce DX project format.

sfdx force:mdapi:convert --rootdir <retrieve_dir_name> --outputdir <output_dir>

Additional Things

To get JSON responses to all Salesforce CLI commands without specifying the --json option each time, set the SFDX_CONTENT_TYPE environment variable: export SFDX_CONTENT_TYPE=JSON

Log levels: --loglevel DEBUG

Valid values: ERROR, WARN, INFO, DEBUG, TRACE

To globally set the log level for all CLI commands, set the SFDX_LOG_LEVEL environment variable. For example, on UNIX: export SFDX_LOG_LEVEL=DEBUG

How to setup and build Hyperledger Fabric Blockchain Applications

This is an introduction to how to configure and launch the Hyperledger Fabric v1.0 permissioned blockchain network on an Ubuntu 16.04 Virtual Machine on AWS.

If you want to skip configuring the VM/images check out the IBM Bluemix Fabric managed service: https://console.ng.bluemix.net/catalog/services/blockchain/

Below are the command-line steps, in addition to links to further guides on configuring your network.

The first part of this article is focused on the infrastructure layer, Hyperledger Fabric.

The second part of this article is focused on the application layer, Fabric Composer.

You can spin up the latest versions of Fabric V1 and use the shell scripts for standup and teardown of the network in the second part.

The Infrastructure Layer

Creating your VM on AWS

The first thing you are going to do is go to AWS and create a new account:

Go to https://aws.amazon.com/

Create your account and you will need to create a support request to increase the number of EC2 instances you can have.

Once your limit has increased you will launch a new EC2 Virtual Machine using the Launch Wizard.

Choose your Instance Type: c3.large

Go through the configuration screens and add storage to your VM; 8/32/64 GB should work.

Once this has been done go to the Launch screen and generate a new key-pair for this Virtual Machine.

You will download the key pair and put it in some folder on your local machine.

Next the instance will launch and you will be able to see it on your dashboard.

Copy the URL from your Virtual Machine

Open Git on your machine and go to the directory where you have downloaded the key to your Virtual Machine.

then ssh -i ./yourkeyname.pem ubuntu@yoururlname

once you have SSHed in, you are able to use the Virtual Machine.

Below are the exact steps, or you can continue to the more verbose version in the rest of the article.

This is the exact step-by-step way to do it in less than 20 minutes:


sudo apt-get update
wget https://storage.googleapis.com/golang/go1.7.4.linux-amd64.tar.gz
sudo tar -xvf go1.7.4.linux-amd64.tar.gz
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
sudo apt-add-repository 'deb https://apt.dockerproject.org/repo ubuntu-xenial main'
sudo apt-get install -y docker-engine
sudo curl -o /usr/local/bin/docker-compose -L "https://github.com/docker/compose/releases/download/1.11.2/docker-compose-$(uname -s)-$(uname -m)"
sudo chmod +x /usr/local/bin/docker-compose
mkdir ~/fabric-tools && cd ~/fabric-tools
curl -O https://raw.githubusercontent.com/hyperledger/composer-tools/master/packages/fabric-dev-servers/fabric-dev-servers.zip
sudo apt install unzip
unzip fabric-dev-servers.zip
curl -O https://hyperledger.github.io/composer/prereqs-ubuntu.sh
chmod u+x prereqs-ubuntu.sh
./prereqs-ubuntu.sh
cd fabric-tools
./downloadFabric.sh
./startFabric.sh
./createComposerProfile.sh
cd ..
git clone https://github.com/hyperledger/composer-sample-networks.git
cp -r ./composer-sample-networks/packages/basic-sample-network/ ./my-network
rm -rf composer-sample-networks
cd my-network
npm install
npm install -g composer-cli
npm install -g composer-rest-server
cd dist
composer network deploy -p hlfv1 -a basic-sample-network.bna -i PeerAdmin
composer-rest-server

Don’t forget to change your security groups to open inbound/outbound traffic!


 

Configuring your Virtual Machine

There are a number of packages that you are going to need to install and configure in your VM.

Install Git, Nodejs and GO

Digital Ocean Guide – Node.js

Installing GO and setting your GOPATH

*** update: use NVM for node and npm, no sudo required; this will be important for your Fabric Composer setup ***

sudo apt-get update

sudo apt-get install git

sudo apt-get install nodejs  # better: use nvm (see the note above)

sudo apt-get install npm  # better: use nvm (see the note above)

wget https://storage.googleapis.com/golang/go1.7.4.linux-amd64.tar.gz
sudo tar -xvf go1.7.4.linux-amd64.tar.gz
sudo mv go /usr/local
cd ubuntu
mkdir yourgodirectoryname
export GOROOT=/usr/local/go
export GOPATH=$HOME/yourgodirectoryname
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
sudo nano ~/.profile

Add the three export commands above to the bottom of the ~/.profile file.

Install Docker 1.12 and Docker Compose 1.11

Digital Ocean Guide

sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
sudo apt-add-repository 'deb https://apt.dockerproject.org/repo ubuntu-xenial main'
sudo apt-get update

sudo apt-get install -y docker-engine

docker


sudo curl -o /usr/local/bin/docker-compose -L "https://github.com/docker/compose/releases/download/1.11.2/docker-compose-$(uname -s)-$(uname -m)"
sudo chmod +x /usr/local/bin/docker-compose
docker-compose -v

***** At this point you should have a Linux VM fully set up, and you are now ready to download the Docker images for the various components of your blockchain network. *****

Clone the Hyperledger Fabric v1 Images

You can download the images here and this will show you how to call chaincode directly at the command line. Later on we will use Composer to spin up docker images and write to the chain by deploying a business network archive.

Architecture Overview

All Hyperledger projects follow a design philosophy that includes a modular extensible approach, interoperability, an emphasis on highly secure solutions, a token-agnostic approach with no native cryptocurrency, and the development of a rich and easy-to-use Application Programming Interface (API). The Hyperledger Architecture WG has distinguished the following business blockchain components:

  • Consensus Layer – Responsible for generating an agreement on the order and confirming the correctness of the set of transactions that constitute a block.
  • Smart Contract Layer – Responsible for processing transaction requests and determining if transactions are valid by executing business logic.
  • Communication Layer – Responsible for peer-to-peer message transport between the nodes that participate in a shared ledger instance.
  • Data Store Abstraction – Allows different data-stores to be used by other modules.
  • Crypto Abstraction – Allows different crypto algorithms or modules to be swapped out without affecting other modules.
  • Identity Services – Enables the establishment of a root of trust during setup of a blockchain instance, the enrollment and registration of identities or system entities during network operation, and the management of changes like drops, adds, and revocations. Also provides authentication and authorization.
  • Policy Services – Responsible for policy management of various policies specified in the system, such as the endorsement policy, consensus policy, or group management policy. It interfaces and depends on other modules to enforce the various policies.
  • APIs – Enables clients and applications to interface to blockchains.
  • Interoperation – Supports the interoperation between different blockchain instances.

 

SOURCE: https://www.hyperledger.org/wp-content/uploads/2017/08/HyperLedger_Arch_WG_Paper_1_Consensus.pdf

Getting Started

This is the Fabric Getting Started Guide and here is the end-to-end guide on Github

cd ubuntu
cd yourgodirectoryname
mkdir src
cd src
mkdir github.com
cd github.com
mkdir hyperledger
cd hyperledger
git clone https://github.com/hyperledger/fabric.git
sudo apt install libtool libltdl-dev

Download the Docker Images for Fabric v1.0

cd fabric
make release-all
make docker
docker images

You should see your docker images (these are for x86_64-1.0.0-alpha2):

REPOSITORY TAG IMAGE ID CREATED SIZE
dev-peer0.org1.example.com-marbles-1.0 latest 73c7549744f3 6 days ago 176MB
hyperledger/fabric-couchdb latest 3d89ac4895f9 12 days ago 1.51GB
hyperledger/fabric-couchdb x86_64-1.0.0-alpha2 3d89ac4895f9 12 days ago 1.51GB
hyperledger/fabric-ca latest 86f4e4280690 12 days ago 241MB
hyperledger/fabric-ca x86_64-1.0.0-alpha2 86f4e4280690 12 days ago 241MB
hyperledger/fabric-kafka latest b77440c116b3 12 days ago 1.3GB
hyperledger/fabric-kafka x86_64-1.0.0-alpha2 b77440c116b3 12 days ago 1.3GB
hyperledger/fabric-zookeeper latest fb8ae6cea9bf 12 days ago 1.31GB
hyperledger/fabric-zookeeper x86_64-1.0.0-alpha2 fb8ae6cea9bf 12 days ago 1.31GB
hyperledger/fabric-orderer latest 9a63e8bac1f5 12 days ago 182MB
hyperledger/fabric-orderer x86_64-1.0.0-alpha2 9a63e8bac1f5 12 days ago 182MB
hyperledger/fabric-peer latest 23b4aedef57f 12 days ago 185MB
hyperledger/fabric-peer x86_64-1.0.0-alpha2 23b4aedef57f 12 days ago 185MB
hyperledger/fabric-javaenv latest a9ca2c90a6bf 12 days ago 1.43GB
hyperledger/fabric-javaenv x86_64-1.0.0-alpha2 a9ca2c90a6bf 12 days ago 1.43GB
hyperledger/fabric-ccenv latest c984ae2a1936 12 days ago 1.29GB
hyperledger/fabric-ccenv x86_64-1.0.0-alpha2 c984ae2a1936 12 days ago 1.29GB
hyperledger/fabric-baseos x86_64-0.3.0 c3a4cf3b3350 4 months ago 161MB

Create your network artifacts

cd examples
cd e2e_cli
./generateArtifacts.sh <channel-ID>

Make sure to choose a channel name for <channel-ID>, e.g. mychannel

then run the below to launch the network

CHANNEL_NAME=<channel-id> TIMEOUT=<pick_a_value> docker-compose -f docker-compose-cli.yaml up -d

or use this script to launch the network in one command.

./network_setup.sh up <channel-ID> <timeout-value>
docker ps

To manually run and call the network use

Open the docker-compose-cli.yaml file and comment out the command to run script.sh. Navigate down to the cli container and place a # to the left of the command. For example:

  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
# command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}; sleep $TIMEOUT'

Save the file and return to the /e2e_cli directory.

# Environment variables for PEER0
CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/peer/peer0/localMspConfig
CORE_PEER_ADDRESS=peer0:7051
CORE_PEER_LOCALMSPID="Org0MSP"
CORE_PEER_TLS_ROOTCERT_FILE=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/peer/peer0/localMspConfig/cacerts/peerOrg0.pem
docker exec -it cli bash
peer channel join -b yourchannelname.block
peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
# remember to preface this command with the global environment variables for the appropriate peer
# remember to pass in the correct string for the -C argument.  The default is mychannel
peer chaincode instantiate -o orderer0:7050 --tls $CORE_PEER_TLS_ENABLED --cafile $GOPATH/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem -C mychannel -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Args":["init","a", "100", "b","200"]}' -P "OR ('Org0MSP.member','Org1MSP.member')"
peer chaincode invoke -o orderer0:7050  --tls $CORE_PEER_TLS_ENABLED --cafile $GOPATH/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem  -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

You now should have a functioning network that you can call.

To Setup and use Couch DB for richer queries follow the tutorial here.

To clean up and bring down the network use

./network_setup.sh down
sudo docker ps

you shouldn’t see any containers running.

To stop and remove individual containers use

sudo docker stop <name>

sudo docker rm <name>

To learn about the Hyperledger Fabric API visit http://jimthematrix.github.io/ This is a great resource for learning about the intricacies of the network and the different certificates needed for trusted transactions.

Below is the second part the Application Layer, Fabric Composer.

The Application Layer

You have configured and setup a VM, you have your docker containers running, your Hyperledger Fabric V1.0 blockchain infrastructure is live; now you want to model, build and deploy a business application on this network. This article will show you exactly how to wire up the application layer and the infrastructure layer.

You can run the network locally using npm or docker

  npm install -g composer-playground
 docker run -d -p 8080:8080 hyperledger/composer-playground

Once you have configured your business network you can export it as a BNA (Business Network Archive) file.

 

The Business Network

Permissioned blockchain applications built for enterprise and commercial use need an abstraction layer. This is provided by Fabric Composer, a toolset and application framework that enables you to quickly model and deploy applications to Fabric infrastructure. It is a framework that enables you to reason about the Participants, the Assets, and the Transaction logic that drive state changes on your distributed ledger. This business logic and these processes are what drive the distributed state changes to the peers on your network.

Hyperledger Fabric Composer

Fabric Composer is an open-source project and part of the Hyperledger Foundation.

First, ssh into the same VM you setup.

ssh -i ./yourkeyname.pem ubuntu@yoururlname

Installing Fabric Composer Dev Tools

You should have the majority of these tools installed in your configured VM, but this script will make sure everything is the correct version and that you haven't missed anything.

curl -O https://hyperledger.github.io/composer/prereqs-ubuntu.sh

chmod u+x prereqs-ubuntu.sh
./prereqs-ubuntu.sh
  1. To install composer-cli run the following command:
    npm install -g composer-cli
    

    The composer-cli contains all the command line operations for developing business networks.

  2. To install generator-hyperledger-composer run the following command:
    npm install -g generator-hyperledger-composer
    

    The generator-hyperledger-composer is a Yeoman plugin that creates bespoke applications for your business network.

  3. To install composer-rest-server run the following command:
    npm install -g composer-rest-server
    

    The composer-rest-server uses the Hyperledger Composer LoopBack Connector to connect to a business network, extract the models and then present a page containing the REST APIs that have been generated for the model.

  4. To install Yeoman run the following command:
    npm install -g yo
    

    Yeoman is a tool for generating applications. When combined with the generator-hyperledger-composer component, it can interpret business networks and generate applications based on them.

If you use VSCode, install the Hyperledger Composer VSCode plugin from the VSCode marketplace. There is also a plugin for Atom.

Make sure you have the V1 images, if you have any old ones use:

sudo docker rmi <image> --force
sudo docker container prune

Fabric Tools 

mkdir ~/fabric-tools && cd ~/fabric-tools

curl -O https://raw.githubusercontent.com/hyperledger/composer-tools/master/packages/fabric-dev-servers/fabric-dev-servers.zip
sudo apt install unzip
unzip fabric-dev-servers.zip
export FABRIC_VERSION=hlfv1

To download Hyperledger-Fabric use the script below:

cd ~/fabric-tools
./downloadFabric.sh
sudo docker images
hyperledger/fabric-ca x86_64-1.0.1 5f30bda5f7ee 2 weeks ago 238MB
hyperledger/fabric-couchdb x86_64-1.0.1 dd645e1e92c7 2 weeks ago 1.48GB
hyperledger/fabric-orderer x86_64-1.0.1 bbf2708c9487 2 weeks ago 179MB
hyperledger/fabric-peer x86_64-1.0.1 abb05def5cfb 2 weeks ago 182MB
hyperledger/fabric-ccenv x86_64-1.0.1 7e2019cf8174 2 weeks ago 1.29GB

You can start Hyperledger Fabric using this script:

./startFabric.sh

if you run:

sudo docker ps

You should see your network is up and running

Create and Connect your Hyperledger Composer Profile

Hyperledger Fabric is distinguished as a platform for permissioned networks, where all participants have known identities.

UPDATE: ID Cards

The below line should be a huge ah-hah moment.

**********An ID Card contains an Identity for a single Participant within a deployed business network. ************

You have defined a Participant in your business network (modeled it, but also actually created a record of one), given the participant an ID (id, name, email), and now, the above: the ID Card contains an Identity for that single Participant within the deployed business network.

It is like your auth method. You create the participant on the network, and then you can create an ID that lets you sign in as that participant. Then create another participant, maybe the same type, maybe a different type, and create another ID for that one.

*** Command Line – Create Profile ***

From the fabric-tools directory, issue ./createComposerProfile.sh

This script will create a PeerAdmin profile for you.

Connection profiles are a bit confusing and can be frustrating to set up, but this is an integral part of being able to build and deploy your business network.

There are two folders on your machine:

cd ~/.composer-credentials

and

cd ~/.composer-connection-profiles

When you run Composer locally using Docker containers, the profiles you create will be stored there.

You can find your PeerAdmin profile's public and private keys in the composer-credentials directory.

You then have to import your Composer profile using this command:

$ composer identity import -p hlfv1 -u PeerAdmin -k
/c/Users/domst/.composer-credentials/114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457-priv
 -c /c/Users/domst/.composer-credentials/114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457-pub

OR

composer identity import -p hlfv1 -u PeerAdmin -c ${HOME}/.composer-credentials/114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457-pub -k ${HOME}/.composer-credentials/114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457-priv

You should get a Command Successful Message.

You may run into a permissions problem if your setup is still sudoed out, so you will need to check using

ls -la /home/ubuntu/.composer-credentials

Then, if it is, run

sudo chown -R ubuntu /home/ubuntu/.composer-credentials

Network Teardown

Also, here are some more scripts to stop and tear down the infrastructure layer.

./stopFabric.sh

At this point you should be ready to get dialed in and to model out and wire up a business network. Grab some coffee, a nice glass of water, a new piece of gum; you're just about to get going. In this next part we are going to connect a Sample Business Network to your Hyperledger Fabric V1 Blockchain.

To start over do this

./teardownFabric.sh

Or continue On… to connect a Sample Business Network to your Hyperledger Fabric V1 Blockchain.

Building Your Business Network

This section isn't mandatory, but if you want to use the Playground editor, this is some background on how to access it in the browser. Or you can skip this and use vim.

Make sure Fabric Composer is running and your security groups have inbound and outbound traffic open. Then go to your Amazon Web Services URL, adding the /editor path for the Composer editor:

http://yourURL.compute.amazonaws.com:8080/editor

Model your Business Network using the Hyperledger Composer Playground. A Hyperledger Composer Business Network Definition is composed of a set of model files and a set of scripts.  This can be run in the browser or locally using a docker container.

Modeling a Business Network consists of:

  • Participants – Members of the Business Network
  • Assets – Tangible or intangible goods, services, or property tracked on the network
  • Transactions – The state-change mechanism of the Network

You also are able to use:

  • Concepts
  • enum
  • abstract

Lastly, the Business Network can define:

  • Events – Defined indicators that can be subscribed to by external systems
  • Access Control – Permissions for state changes

You can use JavaScript to define the transaction logic you would like to use in your applications. We are just going to use a Sample Business Network; this can be edited and redeployed to update the blockchain.

Clone a sample network using this repo:

git clone https://github.com/hyperledger/composer-sample-networks.git
cd composer-sample-networks/packages
cd basic-sample-network
npm install

You should get something like this:

Creating Business Network Archive

Looking for package.json of Business Network Definition
Input directory: /home/ubuntu/my-network

Found:
Description: The Hello World of Hyperledger Composer samples
Name: my-network
Identifier: my-network@0.1.8

Written Business Network Definition Archive file to
Output file: ./dist/my-network.bna

Command succeeded

 

Deploying Your Business Network To Your Blockchain Infrastructure

Once you have your Sample Business Network you are going to want to create a BNA file. A BNA is a Business Network Archive: a file that describes your network configuration and application, and it can be deployed to the infrastructure you have set up. To deploy your network use:

composer network deploy -a my-network.bna -p hlfv1 -i PeerAdmin -s randomString

TROUBLESHOOTING:

✖ Deploying business network definition. This may take a minute…

Error: Error trying deploy. Error: Error trying install chaincode. Error: Failed to deserialize creator identity, err ParseCertificate failed asn1: structure error: tags don’t match (2 vs {class:0 tag:6 length:7 isCompound:false}) {optional:false explicit:false application:false defaultValue:<nil> tag:<nil> stringType:0 timeType:0 set:false omitEmpty:false} @2
Command failed

Stack Overflow Answer

Setting up our REST Server

Hyperledger Fabric v1.0 provides a basic API using Protocol Buffers over gRPC for applications to interact with the blockchain network. Composer enables you to create a REST server and communicate with the blockchain network using JSON. This part is important and you should understand it before starting to model out your business network and application. Composer generates the REST server dynamically based on the business network participants and assets you design. You can configure your network and launch a REST API that can be called from other applications.

composer-rest-server
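Once the REST server is running, a client can hit the generated endpoints with plain HTTP. A hedged sketch (assuming the default port 3000 and the basic sample network's SampleAsset type; your routes depend on your own model):

var http = require("http")

// GET all SampleAsset records from the generated Composer REST API.
http.get("http://localhost:3000/api/SampleAsset", function (res) {
  var body = ""
  res.on("data", function (chunk) { body += chunk })
  res.on("end", function () {
    console.log(JSON.parse(body)) // assets serialized as JSON
  })
})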

Wiring It All Up

Once your Business Network is deployed to your infrastructure, you should be able to verify at the top right that your connection profile is connected. You are now ready to send transactions to the blockchain directly from the Composer interface or by calling your REST server from a client.

If you are running into any trouble feel free to reach out at dominic@dapps.ai

Versions:

https://gateway.ipfs.io/ipfs/QmXMaNJRGstyib8iAZCZ5HxShorCxCDFDzeG2Hx73hTYcp

5.26 – using v1.0

https://gateway.ipfs.io/ipfs/QmWtaoMGA8SVc7kdjUnRP5hS1KsX51TEYH57VNTPJWQPFh

5.27 – using v1.0 alpha2 docker images and added in fabric composer (current)

https://gateway.ipfs.io/ipfs/Qme6FT3HQzUsY8z69P6KCutaUogmc4BBxCSPJDt3czaM6M

 

IPLD Resolvers

This was in my drafts from a few months back, right when I got really excited about IPFS, content addressing data, and the potential future applications that will be built on the protocol.

Content addressing is a true computational advancement in the way we think about adding and retrieving content on the web. We can take existing databases and use the various parts of the IPFS protocol to build clusters of nodes that serve content in the form of IPLD structures.

IPLD enables futureproof, immutable, secure graphs of data that are essentially giant Merkle trees. The advantages of converting a relational database or key-value DB into a merkle tree are endless.

The data and named links give the collection of IPFS objects the structure of a Merkle DAG: DAG meaning Directed Acyclic Graph, and Merkle to signify that this is a cryptographically authenticated data structure that uses cryptographic hashes to address content.

This protocol enables an entirely new way to search for and retrieve data. It is no longer about where a file is located on a website; it is about what exact piece of data you are looking for. If I send you an email with a link in it and 30 days later you click the link, how can you be certain that the data you are looking at is the same as what I originally sent you? You can't.

With IPLD you can use content addressing to know for certain that a piece of content has not changed. You can traverse the IPLD object and seamlessly pick out pieces of the data. And once that data is locally cached, you can use the application offline.

You can also work on an application with other peers around you using shared state. But why is this important for businesses?

Content addressing for companies will ensure a number of open standards. We can now have fully encrypted private networks that are content addressed.

This is a future-proof system. Hashes that are currently used in databases can be broken, but now we have multi-hashing protocols.

We can build blockchains that use IPLD and libp2p.

The IPLD resolver uses two main pieces: bitswap and the blocks-service.

Bitswap transfers blocks, and the blocks-service determines what needs to be fetched based on what is already in the local cache. This prevents duplication and increases efficiency in the system.
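Conceptually, the split looks like this. The cache and bitswap interfaces below are assumptions for illustration, not the real js-ipfs APIs:

// Fetch a list of blocks: serve hits from the local cache, and ask
// bitswap (the network) only for the blocks that are missing.
function fetchBlocks(cids, cache, bitswap, callback) {
  var missing = cids.filter(function (cid) { return !cache.has(cid) })
  bitswap.getMany(missing, function (err, fetched) {
    if (err) return callback(err)
    fetched.forEach(function (block) { cache.put(block.cid, block) })
    // every requested block is now available locally
    callback(null, cids.map(function (cid) { return cache.get(cid) }))
  })
}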

We will be creating a resolver for the enterprise that enables companies to take their existing database tables and convert them into giant, interoperable merkle trees. IPLD is like a global adapter for cryptographic protocols.

Creating the Enterprise Forest

Here is where it gets interesting. The enterprise uses a few different types of databases: relational SQL databases and noSQL distributed databases.

Salesforce is another type of database that we can take and convert into a Merkle Tree.

S3 is another type of database.

I call this tables to trees.

We are going to take systems that are onPrem siloed or centralized to a cloud provider and turn them into merkle trees using IPLD content addressing.

The IPLD resolver is an internal DAG API module:

We can create a plug and play system of resolvers. This is where a company can take their existing relational database and keep it.

We will resolve the database and run a blockchain in parallel. This blockchain will be built using two protocols that are from the IPFS project: ipld and libp2p

The IPLD Resolver for the enterprise will consist of the following operations (a usage sketch follows the list):

.put

.get

.remove

.support.add

.support.rm
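A hedged sketch of how those operations might be called, modeled loosely on the js-ipld resolver API; the package name, constructor, and option names are assumptions:

var IpldResolver = require("ipld-resolver") // assumed package name
var resolver = new IpldResolver() // real setups wire in a block service

var row = { table: "accounts", id: 42, balance: 100 }

// put: hash the row into the merkle tree and get back its CID.
resolver.put(row, { format: "dag-cbor", hashAlg: "sha2-256" }, function (err, cid) {
  if (err) throw err
  // get: fetch the same node back by content, not by location.
  resolver.get(cid, function (err, result) {
    if (err) throw err
    console.log(result.value) // -> the original row
  })
})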

We will take any enterprise database and build out the content addressed merkle tree on it.

Content Addressing enterprise content on IPFS-Clusters

An enterprise cluster can consist of 10,000, 50,000, or 100,000 nodes, and each IPLD object has to be under 1 MB.

All of these nodes will host the merkle tree mirror of the relational database.

This can also enable offline operation for the network. Essentially they have their own protocol that is mirroring their on-premise system.

We will be starting with dag-mySQL, dag-noSQL, dag-apex

The MySQL hash function that exists on the on-premise system stays when this is implemented. If that hash is ever broken, there is no way to upgrade the system without completely migrating it.

This makes data migration a lot easier or not even necessary in the future. Once the data is content addressed and creates the merkle tree, we can then start traversing the data.

We will also build interfaces that can interact with the IPLD data.

IPLD is a format that can enable version control. The resolver will essentially take any database, any enterprise implementation, and convert it into a merkle tree.

We are essentially planting the seeds (product) and watering them (services). Once these trees are all in place, they can communicate because they are all using the same data format.

Future Proofing your Business

We are creating distributed, authenticated, hash-linked data structures. IPLD is a common hash-chain format for distributed data structures.

Each node in these merkle trees will have a unique Content Identifier (CID), a format for these hash-links.

This is a database-agnostic path notation: any hash, any data format.

This will have a multihash (multiple cryptographic hashes), a multicodec (multiple serialization formats), and a multibase (multiple base encodings).
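Put together, a CID can be read as a self-describing bundle. The breakdown below is conceptual (the digest is a placeholder, not a real hash):

// <multibase prefix> + <cid version> + <multicodec> + <multihash>
var cid = {
  multibase: "base58btc",        // how the CID string itself is encoded
  version: 1,                    // CID version
  multicodec: "dag-cbor",        // how the referenced data is serialized
  multihash: "sha2-256 + digest" // which hash function produced the address
}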

Again, why is this important for businesses? Most important are transparency and security: this is a tamper-proof, tamper-evident database that can be shared, traversed, replicated, distributed, cached, and encrypted, and you now know exactly WHAT you are linking to, not where. You know which hash function to verify with. You know what base it is in.

This new protocol enables cryptographic systems to interoperate over a p2p network that serves hash linked data.

IPFS 0.4.5 includes the dag command that can be used to traverse IPLD objects.

Now to write the dag-sql resolver.

Take any existing relational database and you can now traverse that database using content addressing.

Content address your database to a cluster of IPFS Cluster nodes on a private encrypted network.

A deterministic head of the cluster then writes new entries.

We use the Ethereum network to assign a key pair for your users to leverage the mobile interface. You can sign in via fingerprint or by facial recognition using the Microsoft Cognitive Toolkit. Your database runs in parallel: you keep your on-premise system and also have it content addressed. Content addressing a filesystem or any type of database creates a Merkle DAG. With this Merkle DAG we can format your data in a way that is secure, immutable, tamper-proof, futureproof, and able to communicate with other underlying network protocols and application stacks. We can effectively create a blockchain network out of your existing database that runs securely on a cluster of p2p nodes. I am planting business merkle DAG seeds in the merkle forest. Patches of this forest will be able to communicate with other protocols via any hash and in any format.

This is the way the internet will work going into the future: a purely decentralized web of business trees.

 

Continued:

On Graph Data Types

DAG:

In a graph model, each vertex consists of:

  • A unique identifier
  • a set of outgoing edges
  • a set of incoming edges
  • a collection of properties (key-value pairs)

Each edge consists of:

  • A unique identifier
  • The vertex at which the edge starts (the tail vertex)
  • The vertex at which the edge ends (the head vertex)
  • A label to describe the kind of relationship between the two vertices
  • A collection of properties (key-value pairs)

Important aspects of the model:

Any vertex can have an edge connecting it with any other vertex. There is no schema that restricts which kinds of things can or cannot be associated.

Given a vertex, you can efficiently find both its incoming and its outgoing edges, and thus traverse the graph (i.e., follow a path through a chain of vertices) both forward and backward. This is why you can traverse a hashed blockchain with a resolver.

By using different labels for different kinds of relationships, you can store several different kinds of information in a single graph, while still maintaining a clean data model.
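A small sketch of this property-graph model in JavaScript, with one labeled edge between two vertices (the shapes are illustrative):

var vertices = {
  v1: { props: { type: "address" }, out: ["e1"], in: [] },
  v2: { props: { type: "transaction" }, out: [], in: ["e1"] }
}

var edges = {
  e1: { tail: "v1", head: "v2", label: "spent_in", props: {} }
}

// Traverse forward: follow a vertex's outgoing edges to its neighbors.
function neighbors(vertexId) {
  return vertices[vertexId].out.map(function (edgeId) { return edges[edgeId].head })
}

console.log(neighbors("v1")) // -> ["v2"]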

 

 

Web3, Solc, IPFS and IPLD

Web3: JavaScript API for interacting with the Ethereum Virtual Machine

Solidity: Smart Contract Programming Language

IPFS: Distributed File System

This is a quick introduction to calling the Ethereum Virtual Machine using the web3 API, compiling Solidity Smart Contracts, and traversing content addressed data structures on the Interplanetary File System.

These are some of the core technologies that will be used to build Ðapps.

npm install web3 --save

npm install solc --save

npm install ipfs-api --save

The Ethereum JS Util Library- “ethereumjs-util”: “4.5.0”

The Ethereum JS Transaction Library – “ethereumjs-tx”: “1.1.2”

The first thing we need to do is get testrpc up and running. Depending on the type of machine you have, this could be rather straightforward or it may take a while. The link below should point you in the right direction.

Configure testrpc

Sending Transactions

Once your test Ethereum Node is running, instantiate a new web3 object. This can be done by doing the following:

Start up node in the console:

var Web3 = require("web3")

var web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"))

We can now call different web3 APIs. Test out the connection to TESTRPC by calling:

web3.eth.accounts

You will see all 10 accounts generated from the TESTRPC.

web3.eth.accounts[0]

You will see the address of the first account generated by the TESTRPC.

web3.eth.getBalance(web3.eth.accounts[0])

You should be returned the balance of your first TESTRPC address.

Converting from Wei to Ether

web3.fromWei(web3.eth.getBalance(web3.eth.accounts[0]), 'ether').toNumber()

var acct1 = web3.eth.accounts[0]
var acct2 = web3.eth.accounts[1]
var balance = (acct) => { return web3.fromWei(web3.eth.getBalance(acct), 'ether').toNumber() }
balance(acct1)
balance(acct2)

Send an Ethereum Transaction

web3.eth.sendTransaction({from: acct1, to: acct2, value: web3.toWei(1, 'ether'), gasLimit: 21000, gasPrice: 2000000000})

Send a Raw Ethereum Transaction

var EthTx = require(“ethereumjs-tx”)

// pKey1 is the hex-encoded private key for acct1, shown in the testrpc startup output
var pKey1x = new Buffer(pKey1, 'hex')
pKey1x

var rawTx = {
  nonce: web3.toHex(web3.eth.getTransactionCount(acct1)),
  to: acct2,
  gasPrice: web3.toHex(2000000000),
  gasLimit: web3.toHex(21000),
  value: web3.toHex(web3.toWei(25, 'ether')),
  data: ""
}

var tx = new EthTx(rawTx)
tx.sign(pKey1x)
tx.serialize().toString('hex')

web3.eth.sendRawTransaction(`0x${tx.serialize().toString('hex')}`, (error, data) => {
  if (!error) { console.log(data) }
})

Creating Smart Contracts

var Web3 = require("web3")

var solc = require("solc")

var web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"))

Create a source variable with your Solidity code (this can be any smart contract code, unless you are trying to import another contract at the top):

var source = `contract Messenger {

  function displayMessage() constant returns (string) {
    return "This is the message in the smart contract";
  }
}`
var compiled = solc.compile(source)
source

You can then start to unpack the different parts of the compiled contract:

compiled.contracts.Messenger

compiled.contracts.Messenger.bytecode

compiled.contracts.Messenger.opcodes

compiled.contracts.Messenger.interface
var abi = JSON.parse(compiled.contracts.Messenger.interface)

In order to deploy the contract to the network you will need the JSON interface of the contract, the ABI (application binary interface).

var messengerContract = web3.eth.contract(abi)
messengerContract
var deployed = messengerContract.new({

... from: web3.eth.accounts[0],

... data: compiled.contracts.Messenger.bytecode,

... gas: 470000,

... gasPrice: 5,

... }, (error, contract) => { })

You should now see in the testrpc the transaction broadcasted to the network.

web3.eth.getTransaction("0x0")

Call a Function in the Contract

deployed.displayMessage.call()

IPFS  – Hashlink Technology

IPFS defines a new paradigm in the way that we can organize, traverse, and retrieve data using a p2p network that serves hash linked data. This is done by using merkle-links between files that are distributed on the interplanetary file system.


Content-addressing enables unique identifiers and unique links between pieces of data that live on the distributed network. It creates distributed, authenticated, hash-linked data structures. This is achieved by merkle-links and merkle-paths.

A merkle-link is a link between two objects which is content-addressed with the cryptographic hash of the target object and embedded in the source object. This is the way the Bitcoin and Ethereum blockchains work; they are both essentially giant merkle trees, one with blocks of ordered transactions and one with computational operations driving state changes.

IPLD – Interplanetary Linked Data

IPLD is a common hash-chain format for distributed data structures. This creates a database agnostic path notation.

any hash –> any data format

This enables cryptographic integrity checking and immutable data structures. Some of the properties this yields are long-term archival, versioning, and distributed mutable state.

It shifts the way you think about fetching content. Content addressing is What vs Where.

A "link" represented as a JSON object is comprised of the link key and the link value:

{ "/" : "ipfs/0x0" }

The key ("/") is the link key, and the value ("ipfs/0x0") is the link value. Other properties of the link can be defined in the JSON object as well.

What if we want to create a more dynamic link object? This can be achieved by using a merkle-path.

A merkle-path is a unix-style path which initially dereferences through a merkle-link and allows access to elements of the referenced node, and of other nodes transitively.

This means that you can design an object model on top of IPLD that would be specialized for file manipulation and have specific path algorithms to query this model.

This would look like:

/ipfs/0x0/a/b/c/d

That is: the protocol, the hash of the linked object, and the traversal path.

Here are some examples of traversals with the link JSON object.


This is all essentially a JSON structure for traversing files: navigating through the IPLD object, walking along the IPFS hash to pull arbitrary data that is nested in these data structures.
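For example, with the ipfs-api package installed earlier, you can walk such a path programmatically. A hedged sketch (the hash is a placeholder, and this assumes a local daemon exposing the DAG API on port 5001):

var ipfsAPI = require("ipfs-api")
var ipfs = ipfsAPI("localhost", "5001")

// Start at a root hash and walk the named links a/b/c/d.
ipfs.dag.get("<root-hash>", "a/b/c/d", function (err, result) {
  if (err) throw err
  console.log(result.value) // the nested value at the end of the path
})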

CID: Content Identifier – format for hash-links

Multihash –  multiple cryptographic hashes

Multicodec  – multiple serialization formats

Multibase – multiple base encodings

(Diagrams from Juan Benet's talk "Enter the Merkle Forest".)

This is very powerful because we can use Content Identifiers to traverse different cryptographic protocols: Bitcoin, Ethereum, Zcash, Git.

We could also link from crypto to Salesforce or a relational DB by using content addressing.

The paths between these now disparate systems can be resolved by using this uniform, immutable, distributed file system.dsContent addressing is a true computational advancement in the way that we think about adding and retrieve content on the web. We can take existing databases and use the various parts of the IPFS protocol to build clusters of nodes that are serving content in the form of IPLD structures.

IPLD enables futureproof, immutable, secure graphs of data the essentially are giants Merkle Trees. The advantages of converting a relational database or key value db into a merkle tree are endless.

The data and named links gives the collection of IPFS objects the structure of a Merkle DAG — DAG meaning Directed Acyclic Graph, and Merkle to signify that this is a cryptographically authenticated data structure that uses cryptographic hashes to address content.

This protocol enables an entirely new way to search and retrieve data. It is no longer about where a file is located on a website. It is now about what exact piece of data you are looking for. If I send you an email with a link in it and 30 days later you click the link, how can you be certain that the data you are looking at is the same as what I original sent you? You can’t.

With IPLD you can use content addressing to know for certain that a piece of content has not changed. You can can traverse the IPLD object and seamlessly pick out piece of the data. By using IPLD once that data is locally cached you can use the application offline.

You can work on an application with other peers around you as well using a shared state. But why is this is important for businesses?

Content addressing for companies will ensure a number of open standards. We can now take fully encrypted private networks that are content addressed.

This is a future proof system. Hashes that are currently used in databased can be broken but now we have multi-hashing protocols.

We can build blockchains that use IPLD and libp2p.

The IPLD resolver uses two main pieces; the bitswap and the blocks-service.

Bitswap is transferring blocks and blocks-services is determining what needs to be fetched based on what is currently in the local cache and what needs to be fetched. This prevents duplication and increase efficiencies in the system.

We will be creating a resolver for the enterprise that enables them to take their existing noSQL key value and convert them into giant merkle trees that are interoperable. IPLD is like a global adapter for cryptographic protocols.

2017-02-23 04_34_00-Juan Benet_ Enter the Merkle Forest - YouTube

Creating the Enterprise Forrest

We can create a plug and play system of resolvers for IPLD.

We will resolve the database and run a blockchain in parallel. This blockchain will be built using two protocols that are from the IPFS project: ipld and libp2p

The IPLD Resolver for the enterprise will consist of:

  • .put
  • .get
  • .remove
  • .support.add
  • .support.rm

We will take any enterprise noSQL database (as long as there is a key-value store) and build out the content-addressed Merkle tree on it.
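A minimal sketch of how such a resolver might be wired up in Node, using the ipld-resolver and ipfs-block-service packages from the IPFS project; the package names are real, but treat the exact signatures and options as illustrative of that era’s API rather than authoritative:

const IpfsRepo = require('ipfs-repo')
const BlockService = require('ipfs-block-service')
const IpldResolver = require('ipld-resolver')

// The block service sits over local block storage and decides what is
// already cached locally vs what must be fetched (via bitswap) from peers.
const repo = new IpfsRepo('/tmp/enterprise-repo')
const blockService = new BlockService(repo)
const ipld = new IpldResolver(blockService)

// .put – write a row from the key-value store as an IPLD node
ipld.put({ account: 'ACME', balance: 42 }, { format: 'dag-cbor', hashAlg: 'sha2-256' }, (err, cid) => {
  // .get – traverse back into the Merkle tree by CID plus path
  ipld.get(cid, 'balance', (err, result) => {
    console.log(result.value) // 42
  })
})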

Content Addressing enterprise content on IPFS-Clusters

An enterprise network can consist of 10,000, 50,000, or 100,000 nodes, and each IPLD object has to be under 1 MB.

All of those nodes will be hosting the Merkle tree mirror of the relational database.

This can also enable offline operation for the network. Essentially they have their own protocol that is mirroring their on-premise system.

This makes data migration a lot easier, or not even necessary, in the future. Once the data is content addressed and the Merkle tree is created, we can start traversing the data.

We will also build interfaces that can interact with the IPLD data.

IPLD is a format that can enable version control. The resolver will essentially take any database, any enterprise implementation, and convert it into a merkle tree.

We are essentially planting the seeds (product) and watering them (services). Once these trees are all in place, they can communicate because they are all using the same data format.

Future Proofing your Business

We are creating distributed, authenticated, hash-linked data structures. IPLD is a common hash-chain format for distributed data structures.

Each node in these Merkle trees will have a unique Content Identifier – the format for these hash-links.

This is a database-agnostic path notation: any hash, any data format.

Each will have a multihash (multiple cryptographic hashes), a multicodec (multiple serialization formats), and a multibase (multiple base encodings).

[Screenshot: Juan Benet, "Enter the Merkle Forest" (YouTube)]

Again, why is this important for businesses? The most important reasons are transparency and security: this is a tamper-proof, tamper-evident database that can be shared, traversed, replicated, distributed, cached, and encrypted, and you now know exactly WHAT you are linking to, not where. You know which hash function to verify with. You know what base it is in.

This new protocol enables cryptographic systems to interoperate over a p2p network that serves hash linked data.

IPFS 0.4.5 includes the dag command that can be used to traverse IPLD objects.
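For example, putting a small IPLD node and then walking a path into it looks roughly like this (the JSON document and the path are illustrative):

echo '{"account":{"name":"ACME","balance":42}}' | ipfs dag put

ipfs dag get <cid>/account/name

The first command prints the CID of the new node; the second resolves a path through the DAG and returns "ACME".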

Now to write the dag-sql resolver.

Take any existing relational database, and you can now traverse that database via content addressing.

Content address your database to a cluster of IPFS Cluster nodes on a private encrypted network.

The deterministic head of the cluster then writes new entries.

[Screenshot: Juan Benet, "Enter the Merkle Forest" (YouTube)]

We use the Ethereum network to assign a key pair for your users to leverage the mobile interface. You can sign in via fingerprint or by facial recognition using the Microsoft Cognitive Toolkit. Your database will run in parallel: you keep your on-premise system and have a content-addressed mirror. Content addressing a filesystem or any type of database creates a Merkle DAG. With this Merkle DAG we can format your data in a way that is secure, immutable, tamper-proof, futureproof, and able to communicate with other underlying network protocols and application stacks. We can effectively create a blockchain network out of your existing database that runs securely on a cluster of p2p nodes. This is an IPFS-Cluster. Here are the commands that can add / remove CIDs from peers in the cluster:
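Roughly, via the ipfs-cluster-ctl tool from the ipfs/ipfs-cluster project (these subcommands exist in the tool; the <cid> placeholders are illustrative):

ipfs-cluster-ctl pin add <cid>

ipfs-cluster-ctl pin rm <cid>

ipfs-cluster-ctl pin ls

ipfs-cluster-ctl status <cid>

pin add and pin rm replicate or un-pin a CID across the peers in the cluster; pin ls and status show what the cluster is pinning and how each peer is doing.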

 

[Screenshot: ipfs/ipfs-cluster – "Collective pinning and composition for IPFS"]


I am planting business Merkle DAG seeds in the Merkle forest. Patches of these forests will be able to communicate with other protocols via any hash and in any format.

This is the way that the internet will work going into the future. A purely decentralized web of business trees.

 

[Image: IPLD logotype (protocol.ai)]

2017

We are on the edge of the future – seeing the emergence of technology that we dreamed of years ago: AI-powered bots, holographic GUIs, autonomous vehicles, decentralized application networks. It’s happening fast. There is a new form of distributed world consciousness of truth on the internet; it enables people to come to distributed consensus about certain things. Sometimes it is tough to tell which things are certain, so cryptographic proof of the provenance of content will become a standard. Ratings and reputation currency creating the convergence of identity. Proliferation of meta-physics in numbers. More ICOs, tokenized application networks, appcoin protocols, zero-knowledge-proof currency spinoffs, enterprise crypto consortia, and crypto hedge-funds.

This past year I learned:

This is going into the 5th year I have written on my blog. I am at just over 45,000 all-time views.


I hit 11,424 total views this year; not bad, though a slight decrease from 2015.

[Screenshot: WordPress.com stats]

  • 2013: A Year in Review
  • 2014
  • 2015
  • 2016

 

These were some of my goals from 2016:

  • Learn AngularJS and ReactJS
  • Learn Microsoft CRM Programming C# .NET
  • Earn Blockchain Technology Patent for Smart Contract Management
  • Build the Dreamforce Story and Demos – (Alphabet-type HUGE Conglomerate)
  • Build Salesforce Lightning Components for the Relaunch
  • Learn how to push GIT, anywhere in the world to anyone in the world
  • Build 21.co applications to earn Bitcoin
  • Learn Chinese and Portuguese
  • Build a program with the Tessel.io
  • Start MicroSaas with the BizSpark program at Microsoft
  • Trade Cryptocurrency daily, more ICOs,
  • Finish the Art of War
  • Write a blog post on the state of cryptocurrency
  • More water every day
  • Eat Healthy: Chicken, Vegetables, Almonds, Salmon, Brown Rice, Eggs, Protein Powder, Oatmeal
  • Run and Lift at 6AM at Koret out by 6:45 to get to work by 8

Some of the highlights from 2016:

  • Traveled to London for work and visited Spain (Madrid, Barcelona, and Ibiza) with my Dad for his 60th.
  • Flew to New York for the second time for the Consensus 2016 Hackathon and won the prize for best App on Azure by Microsoft. MicroSaas – Enterprise Smart Contract Platform and Exchange (BizSpark grant).
  • Made it into an article in TechCrunch for Bots and VR at Apttus
  • Started investing in ICOs

Focus in 2017

  • Build Dapps
  • Build with Babel, ReactJS, Redux, GraphQL, and Node
  • Keep building Bots and a VR/AR interface library in Unity
  • Learn about Containers, Docker, Kubernetes
  • Learn how to write python scripts with Tensorflow, Theano and Keras libraries
  • Write 21.co computer python scripts
  • Write 1000 Ethereum Smart Contract Clauses
  • Read Applied Cryptography cover to cover with annotations
  • Keep a written Daily Journal and Physical Calendar
  • Say my autosuggestion every morning
  • Drink more water, run, and read every day (52 books, 365 miles)
  • Get 100K views on the site

Crypto –

Cryptocurrencies, cryptographic application networks, applied cryptography protocols, application coins, decentralized application networks. Dapps.

Bitcoin is first and last, OR Bitcoin is first of many. The price of bitcoin is near $1,000 and the overall market capitalization has hit an all-time high:

[Screenshot: Bitcoin price and total market capitalization]

 

Do we only need one cryptographic state machine for the world, similar to one internet for the world? Do SegWit, payment channels, TeeChans, and the like round out the Bitcoin network, or do Turing-complete scripting decentralized state networks, aka Ethereum, bring us into the next generation of decentralized applications? There is room for both, and the Year-over-Year price of both tokens seems to think so as well. Despite the DAO hack dropping the price from a high of $19 down to around $7.50 now, Ethereum is moving fast.

Companies such as Truffle are making smart contract development for the Ethereum network much easier. Zeppelin is creating a library for secure and effective smart contracts, BlockApps is creating the bridge for enterprise blockchain deployments and apps, and MetaMask is making your browser Ethereum-network compatible via the web3 API.

ZCash – zkSNARKs – zero-knowledge proofs have a ton of upside: 21 million notes that come with an encrypted memo field, a key that enables selective disclosure, and of course the blockchain’s immutable, petrified effect.

Brave and Blockstack are reimagining the internet browser experience.

The finserv platform players have launched as well: R3 has released Corda, and Chain has launched its platform and its smart contract language, Ivy.

Enterprise is off to the races with Microsoft leading the pack, IBM not far behind banking on Hyperledger on Bluemix, and AWS with Monax.

Consulting firms are just starting on POCs, such as Deloitte’s Rubix.

I think this is a fundamental shift in the enterprise back office from on-premise, to cloud, to dapps.

Dapps for the enterprise; or a new UI over decentralized application networks like bitcoin or ethereum.

That is next.

One last thing on ICOs and blockchain tokens: a paper was released on the difference between securities and blockchain tokens which I found interesting. Ideally, if doing an ICO or any cryptocurrency, the following is something to keep in mind:

  • If the token is sold once there is an operational network using it, or immediately before the network goes live, it is more likely to be purchased with the intention of use rather than profit.
  • If it is built with all of the technical permissions necessary to access the network’s functions, it is for use, and able to be used, by an individual buying the token.
  • A token which has a specific function that is only available to token holders is more likely to be purchased in order to access that function and less likely to be purchased with an expectation of profit.

The buy in is based on the premise of the usefulness of the underlying function that the token enables on the decentralized network. The value of the token is based on the supply and demand of the underlying function.

AR –

Built AR Hololens CPQ Applications

Jet and Bike configuration inside of Hololens –

Learned how to create a Holo-App

Learned C# classes and  Visual Studio

Learned Unity: how to build GameObjects, import GameObjects from Blender, apply scripts to different game objects and manager objects, and call methods that look into the children of certain GameObjects. In doing so I was able to create animations and implement visual rendering based on user interaction via voice and gesture taps.

Unity is a very powerful platform to learn; it is very remarkable to essentially anchor an image from a computer into a digital layer in the space around you.

I think Unity as a platform is amazing. You are able to download a 3D model, import it into a scene, apply a script, and deploy it to the 3D world to interact with.

The learning curve to accomplish this is not too bad. You can start designing and deploying 3D models in a holographic interface. The other cool thing is that they stay anchored, in that when I put a hologram in my living room and come back two weeks later, my hologram is still there.

It’s tethered to the real world in space; there’s context to what you build.

I enjoy working on VR.

I think that setting up the VR and learning how to deploy holograms was one of the hardest but coolest projects I have ever worked on.

Bots –

Built bots using Howdy.ai’s Botkit for Slack and wrote the OAuth 2 script

Built bots using Microsoft Botbuilder and attended BotDay

Built out Max… bots on Azure

Integrated a LUIS NLP model into the bot code, hosted on Bitbucket with continuous deployment to an Azure web app.

Conversational interfaces are no doubt a new way to interact with applications, but they also inherently change the way we build software.

Bots are never really a finished piece; there is always something to optimize, tweak, or add in to make the bot more human-like and create a better experience for the user.

There is so much friction that is removed in conversational interfaces. The advancements with linguistics and computers are astonishing. I remember creating my first SVG trees using Prolog to parse English and Japanese sentences. I didn’t realize it then, but ultimately that recursive programming was an excellent baseline for my understanding of how to create bots that leverage NLP.

The other important thing about getting a bot up and running: that it is being hosted, you have the app ID and secret, you have continuous deployment from a Bitbucket repo, you have the right size application node, your web services leverage keys that do not expire causing latency issues, you are testing locally with an emulator before pushing changes to the live bot, you are taking security into account, you are querying the data intelligently, the bot sounds conversational and you have removed console.log statements from the dialogue, you are able to talk to the bot randomly and achieve some outcome, the bot is available in multiple channels and can dynamically alter its response based on the context, and the bot learns, and you learn, what can make it more efficient, more effective, and able to execute what it is being told to do.

Also,

that you have a pricing model for the bot, and you know exactly how much the bot is going to cost you and earn your platform provider in the form of consumption; that it is ultimately going to have to be managed through a lifecycle of improvements; that during these lifecycles people who need to access the bot are going to come and go, so you will need a way to manage which users have access, and, if you want to update at a certain time for certain users, how that can be accomplished; and ultimately whether you can run different advanced algorithms through all of the data you are collecting, because you are essentially creating a platform, and in essence you will need a way to bake in some form of deep neural net; my guess would be, to stay ahead of the market.

 

I am truly excited to begin the next step in my career. I’ll be working on the technology that is fundamentally going to change the way that enterprise companies around the world do business. There is the combination of having a decentralized blockchain network facilitate certain business functions between trustless parties. The decentralized trustless yet collaborative industry. Decentralized Applications that enable certain functions and state on a world computer, a distributed consensus protocol, from an internet browser. There is the concept of having an intelligent agent calling and deploying to these decentralized networks…

 

Happy New Year,

Live the dream.

-Dom


How to Build HoloApps

The Microsoft Hololens enables an entirely new immersive experience with computing. There is yet another digital layer that we have the ability to tap into and build upon. It is profound when you deploy a holographic model from your 2D computer to the 3D world.

What makes the HoloLens different from other types of computing platforms? What if holographic images became ubiquitous? Are holograms anchored to points in the world the future of distributed computing? What about holographic bitcoin nodes? (Maybe a stretch.)

Disclaimer: prior to the last month (late July / early August), I had never programmed in C# or with the Unity platform. I could have used JavaScript, but everything I read said that specifically for this type of development it is best to use C#. Compiled, more powerful, etc.

I wrote this article on my phone as I built out the project.

From the start I did not want to think about the end result of seeing something floating in front of me and being able to interact with it in 3D space, let alone voice activation built on NLP that could connect the HoloLens to the REST APIs of my Bot.

Still I was able to get a digital object in front of me. The first being a cube, then a plane, then planes, then a bike, then bikes.

I prefer to think in the different components the HoloApp needs to accomplish the goal of AR CPQ (Augmented Reality Configure-Price-Quote).

I know the NLP has to be trained to match the right utterance. I know that the Bot has to gather the right parameters to be serialized in a JSON object. I know the scripts that allow me to interact with the holograms have for the most part already been built. I know that the 3D models from TurboSquid cost money and it will be tough to build a compelling demo with the free ones. I know that the GUIs will need to look great but also be easy to interact with for someone who has never used a HoloLens. And that the story of configuring a product in the HoloLens has to make sense.

The things I don’t know about: how to consume our APIs directly, yet, or how to use spatial audio and spatial mapping and incorporate them into the demo.

The HoloLens is amazingly beyond whatever your baseline idea of holographic technology is. In a sense, any prior exposure to what you would think of as the ultimate VR future is limiting. With this in mind, it is important to understand that tapping into this digital layer requires you to think about the experience differently.

One of the GUIs was all I had to figure out to realize that this new form of computing is in its early stages. It is early. Very.

Ironically, it put into perspective that yes, we are living in a time of incredible technological advancement, but still, it is very early. The GUIs are surfacing the state of underlying data sources. This could be a record in Salesforce, data from an entity in Dynamics, or data from a smart contract on the blockchain. The GUI in the VR world needs to be diegetic. The cards need to be interactable by gaze, gesture, and speech; they not only look cool but are there to augment the experience of the user.

It is not easy to type in VR.

This is another reason why bots and VR are a catalyst for each other in that the NLP used in bots will be used in VR experiences.

The other interesting concept in VR is scale. An object’s scale and distance from the main camera makes all the difference. Being able to grasp what distance and scale any object is will make developing for the Hololens much easier.

Another component of this world which I am just getting into is raycasting. Being able to cast a ray from the main camera’s eye to an object should affect the object’s state, look, and feel in VR. Having the light hit a GameObject with a certain material applied is what makes the experience unbelievably realistic.

The MeshFilter creates a skin or surface, and the MeshRenderer determines shape, color, and texture.
Renderers bring the MeshFilter, the Materials, and the lighting together to show the GameObject on screen.

Animation is next. I need to be able to have voice activated animation drive the configuration of the product.

One of the reasons being that the airtap motion is very confusing at first for users. The best experience would be having an object expand into its components for configuration, having the user select different options, and then validating the configuration bringing the model back together.

The way to achieve this would be to have the game recognize keywords or intents via LUIS, and then have various scripts applied to the game object that make it interactable.

A manager game object.

Animations can be achieved with scripts or with the Animator in Unity. I tried understanding the keyframes and curves but still have not figured it out yet.

I need animation to show when a user either hovers over or air taps a selection for it to change the corresponding object in the configuration.

I achieved this using renderer.enabled onGazeEnter. I then moved the 3D box collider corresponding to the different objects out to where the tile was.

I have the select options down, either by voice, air tap or clicker.

I also added in Billboarding to the quote headers so it always faces you as you are configuring the product. I added in sound so the user knows when they make a selection and lastly I am working on getting the 2D images of the options into the tiles in addition to making the tile green upon selection.

Actually pretty tricky, but I’ll figure it out.

When calling the NLP from within the hololens via direct line REST API, there needs to be an avatar the user can speak to.

This concept of having the audio come from an actual thing gives it a persona. The next couple steps of the demo are creating the cards for every object which will be rendered based on the voice commands given to the avatar. Once this is complete I will need to work on the plane scene. Lastly, next week I will begin on adding our CPQ APIs to the existing bike demo.

32 Days until Dreamforce

After a couple hours I figured out turning the tiles green and replacing the original blue tiles on selection by using GetComponentInChildren and setting renderer.enabled to true or false on airtap.

About 3 weeks left til Dreamforce.

I just started on the configuring-the-inside-of-a-plane demo. Again, getting the objects to the right scale and the right distance from the camera is key. We now have a CPQ web service hosted on Azure which we are calling using a UnityWebRequest.

The next thing I have to do is work on rendering different textures of a GameObject on hover.

Also, do I call the web service directly with a UnityWebRequest, or do I hit the Bot’s API, which then calls the web service using the UnityWebRequest? Probably the latter.

Other than that, it’s now a matter of just dialing everything in; I have the right assets, digital and physical, the scripts are there, it’s time to put everything together.

OK, so actually it was the first option. I called the CPQ web service using a UnityWebRequest and a coroutine OnStart to create a QuoteId and CartId and retrieve the options for the bike bundle.

I put this C# script on the Manager object. The other web services, such as add option and remove option, will probably remain on the gesture handler script. Similarly, a coroutine is called OnAirTapped.

Tomorrow I have to work on parsing the JSON response from the web service and binding the option IDs to the GameObjects in Unity.

Once this is done, I will create the action to call the finalize web service. After that, well, that’s when we start to get into it. Making it POP. Then of course running into problems x, y, and z, but ultimately heading in the right direction.

I ran into a few serialization errors, to say the least. After hours of testing different scenes, debugging, I have a stable project back.

The next part I have to develop is the game loop and surfacing the data in the GUI. The game loop takes an index of compiled scenes and directs between them. Advancing levels in a game, same concept. The level 0 piece I need to finish is a switch case: if bike A is selected, go to scene A; if bike B, scene B; if bike C, scene C.

The three bikes will be rotating and upon selection it will LoadApplication (scene#).

The other game loop will be to finalize the configuration. Upon the user saying “Finalize”, the KeywordManager will set the Finalize GameObject to active. I have an if statement in the GameObject: if this.SetActive(true), then StartCoroutine Finalize and LoadApplication(0).

Possibly some other cool stuff such as sounds and showing the quote, etc., before going back to the selection scene.

The plane is still a work in progress. It is a different experience because of the small field of view of the HoloLens. It is still very cool, but I am working on a solution to make it more of an AR experience vs VR. Overall, both projects have come a long way. Leveraging the HoloToolkit, exporting Unity packages, googling, and reading books on C# and Unity all have had a huge impact.

There are about 10 days until Dreamforce, and the last 10% of any project like this is the toughest.

It should come down to the wire, it always does.

Last night I might have figured out a powerful and reusable way to build Augmented Reality components. Renderer.enabled can be applied for any sort of dynamic action. Here are my notes:

// On a script attached to the parent GameObject:
var x = GetComponentInChildren<Renderer>();

x.enabled = true;   // show the child object

x.enabled = false;  // hide it again

Everything is a GameObject.

Overlay the options for the bike selections, and on selection set renderer.enabled = true for the corresponding text.

Essentially, having the different objects within a parent rendered based on conditions being true or false in the parent can tie together the children components, i.e. a change of color and text showing up.

I still have some testing to do but I think this may enable a lot.

I have roughly a week to dial everything in. The plane will be today. Working on rendering different textures. Other than that, it is moving the colliders for the other bikes and creating a bike-select script.

I have about 3 days left, and the last week has been a gamechanger. With some help (it would not have been possible without them), I was able to build and deploy the last parts of the project.

Item 1) Serialization with Newtonsoft on UWP takes a little toying with. Download the package and a portable path, and download a separate DLL I will attach here.

Net: major part of the project.

Binding data to the GUI dynamically. By getting the GameObject and setting the material.

This was also huge and done with code below:

The biggest thing I just figured out, originally for the plane but it applies to the bike as well, was being able to loop over each mesh in the parent and render it.

Today I built the plane scene, added keywords so the options can be said.

Still have to bind the quote number and updated price.

I worked two months day and night to be able to deliver tomorrow.

There is no doubt in my mind this was one of the biggest challenges I have ever had. From learning Unity, learning C#, learning how to call Apttus APIs, importing 3D models and scaling and texturing them, solutioning the build, designing the GUIs, making it voice activated; overall yes, it was very difficult.

There were two things I will need to figure out with the build. How to render different materials of a GameObject On Tap… but I have… just thinking on it still.

And how to deselect an option in a group before selecting another.

Other than that, yeah, two to three months of work. Never thought I’d work on that.

Enterprise Augmented Reality, Apollo.

Top Tips:

  • Deploy your apps to the HoloLens using the USB cord. I was using WiFi for the two months of development and on the last day realized deployment took 30 seconds with USB. (Depending on the size of the app, it could take 15-20 minutes over WiFi.)
  • TurboSquid –> Blender –> Unity
  • Leverage the Unity Asset Store
  • Export your Asset and project Settings / Create External Packages
  • Final 10% is the toughest

Learn:

  • Shaders
  • Storyboard it out
  • Create Prefabs
  • Export Unity
  • Additive vs Subtractive Color
  • Vector3
  • Quaternion
  • Mathf
  • Everything is a GameObject
  • Declaration before Definition

 

Sources:

Microsoft HoloAcademy

Unity Docs

Intro to Game Design, Prototyping, and Development  

 

 

on ipfs:

https://gateway.ipfs.io/ipfs/QmbPk95nDCk4MBoidQHxs2zuZ44TemH7YjA4E3nCBZBeoK

Chatbots

CryptoSlackBot: n. A bot that executes buy and sell orders on cryptocurrency exchanges.

Aptbot: n. A bot that executes Quote-to-Cash functions from within messaging platforms.

Do you want to build your own Slack bot, universal bot, or Facebook Messenger bot? Eventually list it as an app and publish the bot to a directory?

Over the past 5 months I have developed bots for: Slack, Skype, and mobile web.

There is a common pattern when building these bots: dialog triggers, parameter gathering, message brokering, and response formatting. In this article I will explain:

  1. How to Build a Bot
  2. How to connect the Bot with any REST API
  3. How to host and deploy the Bot
  4. How to create a Bot API

How to Build a Bot

These are the tools and languages you are going to use to build the bot:

When building a bot it is important to understand that you are abstracting away the UI with intelligent cognitive services. There is an abundance of web-based services and applications that are locked into their own interfaces. We have arrived at a time where we often interact with apps without a UI. Why do I need a phone to call an Uber? Can’t I just tell the Uber bot in a Slack channel to pick me up from work and drop me off at the train? The bot needs two parameters: from and to.

This new paradigm of having intelligent assistants is deemed conversation-as-a-service.

The front end is the conversations that you will have with the Bot to gather the parameters that are passed to your server in the form of a serialized JSON object. The server deserializes it and responds back to your bot with its own serialized JSON object.

The user interacts with the bot –> formats the request with the parameters –> the API retrieves the call data –> responds with error/data –> formats the response to the user.

When gathering the parameters with a bot, you need to initiate the conversation or dialog. To start a conversation or dialog, you include an array of words associated with a particular conversation. When the bot hears one of those words, it triggers that specific dialog, and that is ultimately what drives the JSON request to the API service.

You can also combine Natural Language Processing with your bot to make it more intelligent using a service like LUIS AI: take an utterance and match it to a particular intent, which is tied to a set of questions (the dialog).
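Here is a minimal sketch of that trigger-then-gather pattern using Botkit’s Slack API of that era; the keywords, the question, and the token variable are illustrative:

const Botkit = require('botkit')

const controller = Botkit.slackbot({ debug: false })
controller.spawn({ token: process.env.SLACK_BOT_TOKEN }).startRTM()

// Dialog trigger: an array of words associated with this conversation
controller.hears(['quote', 'price'], ['direct_message', 'direct_mention'], (bot, message) => {
  bot.startConversation(message, (err, convo) => {
    // Parameter gathering: ask for what the API request will need
    convo.ask('Which product do you want a quote for?', (response, convo) => {
      // Message brokering: serialize the parameter for the API call
      const payload = JSON.stringify({ product: response.text })
      // …POST `payload` to your API service here, then format the reply
      convo.say('Requesting a quote for ' + response.text)
      convo.next()
    })
  })
})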

Once this is achieved, the bot will store and pass this as a request in a serialized JSON object, the server deserializes it and responds back with callback data.

Everything in Javascript is an object.

Everything, in Javascript, is an object.

URL Hacking | Passing of Parameters

Once you pass the various parameters needed to the API, it responds back to your Bot at which point you can drive another action.

Essentially you are pulling in the functionality of any web service into a command line where you have interactive programs branching and executing various functions.

Let’s Start Building

There are a number of companies that enable you to build a bot:

  • Howdy.Ai’s Botkit
  • Microsoft Universal Botbuilder
  • API.AI

In order to get one of these up and running you are going to need to execute the following code.

Botkit for a Slack Bot

In this tutorial I am using Botkit.

cd c:
mkdir newbot
cd newbot

npm install botkit
npm install superagent
npm install body-parser
npm install express

npm init

git init
git add .
git commit -m "first commit"
git push -u origin master

heroku config:set -a <appname> <VAR>=<value>
git push heroku master

Let’s break it down:

The first thing we are going to do is create a new folder for the bot application:

cd c: 
mkdir newbot
cd newbot

Once you have your new folder you are going to create an app on Heroku.

Then you need to install the various NPM packages that you need to get the bot up and running.

Howdy.ai’s Botkit is a great way to customize and get a bot up and running for a variety of different messenger platforms.

npm install botkit
npm install body-parser
npm install express
npm install superagent

Express will be used for the server, and body-parser will be used to parse the JSON payloads that format the hypercards in the platform channels.

npm init will create your app’s package.json.

npm init

Once you have your application setup the next step is to deploy it to Heroku.

Make sure you setup a Procfile so that your bot can “live” on the server.

Create a Procfile in the directory with the text worker: node server.js
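For reference, a minimal server.js for that worker might look something like this (an illustrative skeleton, not the full bot):

// server.js – minimal skeleton for the Procfile's worker
const express = require('express')
const bodyParser = require('body-parser')

const app = express()
app.use(bodyParser.json())

// Simple health-check route while the bot logic runs alongside
app.get('/', (req, res) => res.send('bot is alive'))

app.listen(process.env.PORT || 8080, () => {
  console.log('server up')
})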

Now that you have the base bot app setup you can begin deploying the code to Github.

Create the repo on Github and then set the remote in your directory.

git init
git remote add origin <repo-url>
git add .
git commit -m "first commit"
git push -u origin master

 

Connecting Bots to a REST API

Download the npm module of whatever REST API or wrapper you want the bot to connect with. Check out my GitHub page to see examples.
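For instance, from inside a dialog you might broker the gathered parameters out to a REST API with superagent (the endpoint and fields here are placeholders):

const superagent = require('superagent')

// Broker the parameters the bot gathered out to the API
superagent
  .get('https://api.example.com/quotes')   // placeholder endpoint
  .query({ product: 'bike' })              // parameters gathered in the dialog
  .end((err, res) => {
    if (err) return console.error(err)
    console.log(res.body)                  // format this into the bot's reply
  })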

Deploying to Heroku

It is important to configure the vars in your Heroku Application so that they are not explicitly declared in your code.

The variables needed are your Slack Apps Client ID and Client Secret.

git add .
git commit -m "configure bot"
heroku config:set -a <appname> SLACK_CLIENT_ID=<id> SLACK_CLIENT_SECRET=<secret>
git push heroku master

Deploying to Azure

It is important to configure the vars in your Azure application so that they are not explicitly declared in your code. You also want to register your app on the Bot Framework so you can connect to various platforms/services.

The variables needed are your Microsoft App ID and Microsoft Password.

Create an Azure account and create a new Web App. On the Bot Framework portal, set your website and app id/password.

If your web app is up and running and your bot is deployed, you should be able to interact with it on your site.

OAUTH2 and Slack

Instead of just having a custom integration bot, what about creating an actual Slack app?

This is arguably the trickiest part of getting your app up and running.

You want to enable other Slack teams to use your bot in a secure way.

Slack authenticates users by exchanging an authorization code for a bot token.

In doing so, it gives your app access to the org with the different access scopes.

This can be for bots, /commands, and webhooks user access.

The key is 1) creating the authorize HTTP request, 2) handling the response, 3) the redirect to the /x HTTP request, 4) which contains the OAuth code, 5) and grants access.
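A rough sketch of that exchange using superagent; the /oauth route and env var names are my own placeholders, while slack.com/api/oauth.access is Slack’s token-exchange endpoint:

const superagent = require('superagent')
const express = require('express')
const app = express()

// Slack redirects here with a temporary ?code=… after the user approves
app.get('/oauth', (req, res) => {
  superagent
    .get('https://slack.com/api/oauth.access')
    .query({
      code: req.query.code,                      // temporary auth code
      client_id: process.env.SLACK_CLIENT_ID,    // from your Slack app config
      client_secret: process.env.SLACK_CLIENT_SECRET
    })
    .end((err, result) => {
      // On success, result.body contains the bot token to store for this team
      res.send(err ? 'auth failed' : 'bot installed!')
    })
})

app.listen(process.env.PORT || 8080)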

One way to test is using ngrok and pointing the request URL to your local server, which gets a publicly reachable address through ngrok.

Enter the ngrok URL in your app config in Slack, in the interactive buttons, and in the request URLs.

start up ngrok

ngrok http 8080

then start up your app locally

node server.js

then inspect the ngrok requests in the web inspector

http://localhost:4040

lastly, go to your local server in the browser.

In Conclusion

The bot logic can be used across multiple platforms in that there can be multiple inputs from any messaging platform/sensors/holograms; the bot is the broker of these inputs to other platforms. Keep this in mind when building. The bot is and will continue to be a catalyst for web services being brought into single platforms. The new OS is immersive in that we have untapped, disconnected mediums that can now leverage the same bot logic and state.

Next part will be on how to create your Bot’s own API.

CryptoTrends

This article is about what hasn’t been in the headlines in the crypto space.

By crypto space I mean: cryptographic currencies, decentralized ledgers, distributed consensus protocols, and the / a blockchain protocol.

It is fast-paced, riveting, real-time, global; it never sleeps.

From the highly anticipated Halvening, to the Rise of Ethereum and the Fall of the DAO, to the next big anticipated ICO; this year has arguably been the most exciting for this global phenomenon.

I haven’t posted in a while; it still feels like the wild west times of cryptographic state transition machines. Lots has changed, but governments still do not know how to classify it, exchanges have trouble securing it, and people still can’t get enough of it; a true computational advancement growing before our eyes into a new global digital layer.

You have computing nodes around the world opting in to share the concurrent state of ownership of digital entities coupled with an arms race of specialized chips consuming immense amounts of energy incentivizing the search for a mathematical proof that secures and enables a robust cryptographic monetary system.

A truly born global and open enabler of immediate value transfer.

Clearing and settlement that relies purely on the underlying cryptographic stacking of blocks full, and I mean completely full, of digital cryptographic signatures.

On the Surface

Someone can securely hold digital entities of value, they can be exchanged and used anywhere in the world, there is no one that tells them how much they can send or spend; the rules are written in computer code.

One can engage in instant digital arbitrage and use a physical proxy for food at a restaurant or a means of transportation. This is the current reality we live in, or in other words:

There is a digital layer that not everyone in the world knows about. This layer is operated in p2p cryptography. It has already arrived, but is not evenly distributed yet.

One could leverage this layer in reality if they know the way to access it:

One way is to use ShapeShift to exchange one alt for another, set the withdrawal address to a web wallet, send the alt from the web wallet to a global digital exchange, sell for Bitcoin, withdraw to a wallet associated with a Bitcoin debit card, then go to the nearest ATM for cash effectivo.

This could take maybe 15-20 minutes.

In real time using a variety of mediums, pure decentralized digital arbitrage. A seamless flow of taking a non-tangible digital unspent transaction output and turning it into cash.

At any point, regardless of location in the entire world, you can buy, sell, short, swap, any digital entity of any magnitude via a computer in your pocket and then go and use it IRL.

It’s unfathomable.

What other asset class can you trade digitally and globally at any time and then immediately exchange for cash? That has this type of volatility?

Now if buying dinner or booking a flight is the first step, this is only going to become more and more prevalent as more asset classes are put on the blockchain. If I can spend coins on the blockchain, why can’t I earn coins on the blockchain?

On the Machine Layer

Machines do not need a UI in order to interact with the blockchain.

You don’t have to be there for a machine to interact with your endpoint.

If right now I can do all of the above (trade, sell, spend crypto, pull out money), then I should have just as easy a time earning the above via some digital mechanism.

This is what the 21 computer enables: one’s machines earning bitcoin per HTTP request.

This still really doesn’t encompass the profound implications of being able to directly interface with a global value transfer network at the command line or by writing a consumable API. Tapping into the network should be easy, creating a closed loop once in should be even easier.

Something I thought fitting: it comes down to input vs output. The analogy is equivalent to one Elon Musk mentioned recently, in that as humans our input sensors are incredible; however, the output is very inefficient (I have been typing on the train as fast as I can for the last half hour and I am barely at 800 words). It’s even worse when we are on a phone. We have two thumbs. Two measly thumbs to try and explain. Neural lace is a ways out, but I think being able to incentivize machines for output is a step on an indirect but somehow related path.

I need to make it more of a habit to code in Python on my 21.co computer.

__init__ method   | Flask 

Bots

I haven’t been writing as much because I have been building bots. I have been using Howdy.ai’s BotKit to create a number of Slack bots.

Following tutorials online, eventually creating the conversation handlers and connecting them to various APIs using a combination of npm packages, Node.js, and Heroku.

The platforms that are driving the new shift are Slack, Facebook Messenger, and Microsoft. The big differentiator is that Slack was made enterprise-team first. They have the most momentum, and a great indicator of this is their growing app directory.

As an operating system for teams of any size, there is a huge upside to bringing in the various tools one would use on a daily basis into one place.

Being able to leverage /commands, Webhooks, and bots will enable new levels of productivity and ultimately drive business outcomes.

I will have a post soon on how to build these integrations to Slack.

The next bots I make will be with Microsoft botbuilder and Azure.

OAuth

serialize JSON, deserialize.

Callbacks

ICOs and CryptoArbitrage

There has been a recent increase in the number of Initial Coin Offerings.

Lisk was a solid one. Despite the DDoS on the web wallet at launch, overall the coin has given strong returns, has gotten some great backing, and has accumulated a relatively high market capitalization.

There have been a few other ICOs recently that I have not participated in. The big things here: you’re looking at the tech, the team, whether devs are going to build on the platform, what the main differentiator is, what exchanges are going to list it, and, last but definitely not least, follow the whales and traders on Twitter.

There is no question whether or not you can profit off alts; you’re hedging a quick come-up versus a platform and community that contributes to growth and continued innovation.

Don’t Just Hold

From an immediate coin-flipping-for-profit point of view, I would make sure that you have a few different mediums you could use to exchange the altcoin into BTC in a relatively fast way, within an hour at most. Confirmation times and amounts do vary by exchange and by the level of identity verification you have on the exchange, but ultimately you want to make sure that if you are going to be investing in an alt, you know that it can be converted back into Bitcoin without jumping through too many hoops.

When making an exchange in the crypto world it comes down to liquidity and security. If you want to, you can completely trade via your phone, log into your various wallets via web browsers, and refresh and count blocks until your money has moved. Not really recommended for various security reasons, but it does work. On the flip side, you can get a device for an offline wallet or create, print, and send a multisig paper wallet.

Just a side note: when trading, I would make sure to send a little bit first to an address, verify it got there, and then send the rest; just good practice. On the flip side, if you are cashing out from bitcoin to a bank account, make sure that the card / account you are withdrawing the money to is not closed or inactive.

Definitely keep an eye on:

coinmarketcap.com

WhaleClub, twitter feeds

check what time it is in China, NY, and SF.

On the Protocol

I often forget that the code that operates all of this can be directly interfaced with. Actually sending the raw transaction data.

Actually piecing together EVM assembly.

Review the code of the protocol. In a sense if you find a bug the bounty could be millions.

On Nodes

The number of nodes that keep a copy of the blockchain needs to be increased. This also means that the number of nodes running the same version of a protocol needs to be increased as well. A network of Raspberry Pis can create an interactable global digital layer if you know how to directly interface with the Bitcoin protocol or a high-level language that does so.

I don’t think that Ethereum should soft fork or hard fork, disclaimer: did not throw DAOn.

Conclusion

Overall I think there is a bigger trend in play: the same way that everyone learned typing, cryptography will go the same way. Everyone will be learning / using cryptography.

Think about it: you have this entirely new layer that not much has been built on yet.
Look around, nothing really is running on the blockchain yet.
Think about what the Internet as a communication layer between people looked like.
What if cars ran on a P2P blockchain mesh network, each storing a copy of the blockchain?
Would it make sense for cars to be connected via a blockchain layer vs an Internet layer?
Or better yet, is it easier to achieve the outcome by calling it directly from a blockchain?

 

More Research on:

Segregated Witness (SegWit)

OP Return

Recursive Bots and NLP

Questions:

A big question was whether or not Bitcoin was going to be the first or the last cryptocurrency. It’s definitely not the last, but will the alts stick?

Which of the big 5 tech companies will make the first major Bitcoin acquisition?