Making Digital Transformations Happen

My name is Jonathan Tonge.
I create businesses that help organizations target, communicate and engage with their customers.
I create digital media that helps people visualize, interact and learn.

Videos
[Video: Ford GT, copyright 2017, used with permission from Ford / Team Detroit. UNIVERSE Inc.]

My Journey as an Entrepreneur since 2011

October 25, 2016

In 2010 I was leading digital integration and product management at one of Canada's largest media companies, where we were charged with transitioning the company to digital revenue. I really liked the leadership, the people and our mission. However, being an employee is similar to being in a zoo: you are provided with work and money, but in a structured environment you do not have the ability to independently conceive and build. This is a path some creators need to take to find out who they are and what they are capable of, no matter where they end up down the road. I resigned from a great position to find my place in the wild.

SilverBuyer

SilverBuyer was the first business that I launched as a full-time entrepreneur. I got in when silver was still cheap and, with no money invested, I rode the bubble. Within six months, I was consistently earning over $13,000 per week and growing 25% weekly. It sounds like a banner ad for some scam, but it was true, and I was considering franchising it. Unfortunately, the silver bubble popped. With my bank account topped up, I decided to move on immediately to my next project.

One of the greatest lessons from SilverBuyer was to look for channels that are less competitive. I found ways to source and buy silver beyond the website, and that's where I made most of my money.

STORYFIRE Inc. (SF)

StoryFire had been in the making as early as 2012, but we didn't get it going full-time until 2013. StoryFire was a creative video agency that I designed with two other founders. We worked on a number of amazing projects. One highlight was winning a competitive RFP process with Tourism Toronto and the University of Toronto to provide creative video for World Pride.

UNIVERSE Inc.

After StoryFire I was sure that while co-founders can be great, they are also a big reason why companies fail. My goal was to be in digital technology anyway, and this time I would be a single founder. This is usually not recommended because, as Paul Graham says, "a tech business is too much work". I had also read the Startup Genome Report, which found that solo-founded startups take 3.6x longer on average to reach scale.

As a strategic move, I had been trying to get space at Communitech to found my next business. When I heard BlackBerry was taking applications from a couple of startups to receive space where their BlackBerry Jams were once held, I was stoked. I made my interest known and got in! There, next to BlackBerry, Google, Desire2Learn, Christie Digital and many other leading brands, UNIVERSE was founded.

Around the same time I had applied for VP of Marketing at a rapidly growing home builder. I made it a few interviews in, but I should have prepared more for the final interview. A few weeks later I booked a meeting with the CEO who I had previously interviewed with. In the meeting he reviewed the demos and provided insight. I think as an entrepreneur himself, he appreciated the fact that I had the guts to turn this into a sales opportunity.

We aced that first project, then took the work and sold four other builders, including a Fortune 1000 company that is the largest luxury home builder in the world. Working with their corporate architecture team in Pennsylvania, we completed our first project for a development in Palm Springs, California. Our primary contact there reported that they were the best 3D interiors they'd had. Modelling new homes and developments is quite a complicated process because of the exacting requirements, including overall style, custom furniture, finishes, fixtures and surrounding environments. But in order to innovate and create new media around 3D, you need to own the 3D process. In January of 2016 I developed interactive 3D floor and site plans. We also created high-quality photorealistic exterior stills to demo, which turned out amazingly well. At this point I thought for sure that we'd done it. The demos brought in requests for proposals worth over a million dollars in the first half of 2016 alone.

MapSpot.com

I have been an advocate for using interactive vectors on the web for many years now. I remember when you zoomed a vector in Google Chrome, it would flicker beyond its container. I actually worked virtually with Paul Irish there to reproduce the issue and get it fixed. Not a lot of people had used vectors in powerful ways, let alone on the web.

MapSpot is an early-stage framework for making vectors interactive for next-generation web interfaces. This technology can be leveraged to create interactive site plans for new homes, so it has synergy with what I am doing.

In mid-2014 I reached out to John Tory's team to create an interactive map of his proposed $8 billion light rail line called SmartTrack. I like John Tory because he was CEO of Rogers, which fits in with my media background. I found his team to be very professional, honest, trusting and supportive. What a positive change for Toronto! After getting a work order, I spent three weeks designing, coding and launching the map to the media as part of a well-publicized SmartTrack campaign.

SmartTrack became known as one of the most successfully marketed capital project plans in recent election history. An Innovation Fellow for the UN used SmartTracker as the primary example of how transit innovation helped John Tory win the Toronto mayoral election. We spoke to what was important to Torontonians: transit times. If you don't know, Toronto has some of the longest commute times in the world, at 65.6 to 80 minutes, seriously impacting our quality of life. To add to the congestion, both Lakeshore and the Gardiner Expressway were closed that year, so marketing around transit was a sure bet.

The Future of Droids and AI

January 18, 2017

We are on the cusp of a droid takeover. As scary and dangerous as it might seem, it is inevitable unless something changes. In December, the Amazon Web Services team opened up Lex and Rekognition, for audio and visual recognition, to developers. This year a number of manufacturers will be offering low-cost robotic arms that are sensitive to touch. While these products will see tremendous improvements and exponential cost reductions over the next few decades, I believe that 2017 will mark the first year the AI revolution gains momentum. So I want to share my thoughts on what the future droid might look like and how I think success can be achieved in this new market.

We can benefit by looking to the past. VHS, Android, DVD, Blu-ray, IBM and Windows show us the way. The key to success was democratizing the technology to allow manufacturers and developers to integrate their products on a flexible framework. The winning strategy might come from controlling the operating system, middleware and key applications, rather than monopolizing the hardware or any specific piece of software.

Let's consider that we want to have a droid cook for us in our homes. The minimum physical and non-physical components required to achieve this include:

  • Robotic Arms
  • Hand Attachments
  • Sensors (visual, audio, temperature, taste, capacitive, photoelectric, etc.)
  • Mobility (torso & legs, base with wheels & linear actuator, etc.)
  • Motherboard
  • Battery
  • Wi-Fi, Memory and other hardware
  • Intelligent Appliances & Peripherals
  • Operating System

The ecosystem might have millions of manufacturers and developers, each creating specialized products to add to the framework. This could range from companies like KitchenAid making compatible pots and attachments that are safe for robotic use, to companies like Delta manufacturing faucets that turn on and off through wireless communication. Further ecosystem integrations range from ordering groceries from services like Amazon or Grocery Gateway, to recipe and diet integrations, to droid-friendly ingredients that are standardized and recognized by the system.

It makes sense that droids will process information in the cloud. Doing the processing in situ makes the product less competitive and diminishes the strategic advantage: it not only raises fixed costs and reduces potential processing power, but also limits the ability to continuously improve the droid experience after purchase by upgrading the system and learning from its data. The long-term vision for such a system is for the OS to rewrite and improve its own code at a rate that far exceeds human capabilities. That vision starts with the collection of data. After data is processed in the cloud, the operating system would return a set of actions. The cloud half of the operating system would need to manage:

  • General Cloud Computing
  • Visual Recognition
  • Audio Recognition
  • Touch Recognition
  • Taste Recognition
  • General Storage
  • Database
  • Software & Drivers

The primary purpose of the droid operating system would be to bridge the gap between the robotic hardware and the processing in the cloud. The type of data that needs to be securely handled by the OS might include:

  • Location
  • Direction
  • Elevation
  • Arm Positions
  • Hand Tools
  • Camera Data
  • Audio Data
  • Sensor Data
  • Connected IoT Appliance Data

This data might be transmitted in JSON format along with compressed stereoscopic images, video and sound. The operating system in the cloud would receive such data, process it and return recommended actions. The robot would decode and delegate those actions. This process could repeat several times a second.
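
To make that concrete, here is a rough sketch of what a single update from the droid to the cloud might look like. The field names and values are purely illustrative assumptions on my part, not a published specification:

    {
      "droidId": "kitchen-unit-01",
      "timestamp": "2017-01-18T17:42:05.120Z",
      "location": { "x": 2.41, "y": 0.87, "elevation": 0.0, "direction": 184.5 },
      "arms": [
        { "id": "left",  "jointAngles": [12.0, 45.5, 90.0, 10.2], "tool": "whisk" },
        { "id": "right", "jointAngles": [30.1, 12.0, 75.4, 0.0],  "tool": "gripper" }
      ],
      "sensors": { "temperatureC": 21.4, "touchPressure": [0.0, 0.3] },
      "appliances": [ { "id": "oven-1", "state": "preheating", "targetC": 180 } ],
      "media": { "stereoImageRef": "frame-000123", "audioRef": "clip-000123" }
    }

The cloud side would process this, along with the compressed imagery and sound, and return a small ordered list of recommended actions (for example, "rotate right wrist 15 degrees, then lower arm 3 cm") for the droid's OS to decode and delegate to the relevant drivers.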

The success of artificial intelligence and robotics will likely depend on innovation generated by millions of companies and startups. Not one company doing everything. I'm sure an 'Apple' version will materialize, but the key success factor is in the operating system and partnerships. This is a multi-trillion dollar market that is about to unfold. It's much larger than any one company. So the question is whether you are one of the many in the droid ecosystem, such as a droid build-to-order service, application developer, hardware manufacturer or integrated services supplier, or the one and only operating system?

The UNIVERSE where E=MT²

November 2, 2016

Copyright 2016 UNIVERSE Inc. All rights reserved.

Would you agree that what you sense is matter releasing energy? Think about it. You are seeing and feeling energy. Energy released by matter. You can't see the matter; you can only sense the energy it releases. As time progresses, what's around you will evolve due to the interaction of matter and energy. However, the sensory experience of this world will be entirely made up of energy.

We sense free energy, like photons, in time after it is released from matter. But how do we know light is travelling across a distance in the same way we think of matter travelling? No one has ever measured energy or matter on its own. There could be a reason for that. Only matter can detect waves of energy. Only energy can detect matter. Do the two only exist together?

What if light does not travel? That would mean 'the speed of light' is grossly invalid. What if the distance between two objects is time? Distance = Time. Something that is far away from you is in a different time. Rather than light 'travelling', time passes. 300,000 km is one second. E does not equal MC². E = MT², where T is the constant time of the universe. Energy, matter and time, all in one formula. The only difference is that rather than 300,000 km per second, it's 300,000 km equals one second. Since we'd also have to redefine the newton and joule to reflect this new assumption, the equation would hold.
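
As a back-of-envelope illustration of that last point (my own sketch of the unit swap, not standard physics), take one kilogram of matter:

    E = mc^2 = 1\,\text{kg} \times (300{,}000\,\text{km/s})^2 = 9 \times 10^{16}\,\text{J}

    300{,}000\,\text{km} \equiv 1\,\text{s of distance} \;\Rightarrow\; c \to T = 1

    E = MT^2 \quad \text{(numerically the same, once the joule is rescaled by } (3 \times 10^8)^2\text{)}

In other words, the formula only "holds" once distance is measured in seconds and the energy units are redefined to match, which is exactly the redefinition described above.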

Without energy, you wouldn't sense matter. Without matter, you wouldn't sense energy. Without time, you wouldn't sense either. Energy, matter and time are interdependent and only exist together.

Thinking of the universe in these terms produces some further thoughts:

  • Matter is bound to time, but seeks to destroy it
  • Free energy is not bound to time, but seeks to create it
  • When matter captures energy, time is created
  • The Big Bang was the start of time
  • All energy still exists at a single point since it is not bound to time
  • Seconds of Time = (joules / kg) / 300,000,000

Einstein said that light is everywhere in the universe all at once. If all energy is at a single point, then it makes sense. It also explains why light, even when emitted from a moving object, does not travel faster than 300,000 km per second. That's because it's not travelling. Time is passing, and time is a constant. Perhaps this is also why the universe is expanding. Time is expanding. Time as distance also explains the so-called "singularity" before the Big Bang.

As humans we think of distance as the physical measurement between matter, rather than the time between us. Isn't a distant galaxy far away because it takes a long time to get there? We even refer to it in light-years! Have we underestimated time as just some fourth dimension, when in truth the first, second and third dimensions are all time as well? E=MT² makes sense.

Think about it once more. You've never seen matter. All you've ever seen and felt was energy. Energy at a point in time. Energy released by matter. Would you agree with each of those? They exist together. Yet you walk around every day thinking it's all just matter and that distance is only meters and feet.

We've made assumptions that work in our society but result in putting blinders on us scientifically and philosophically. Then we apply the concept of speed to free energy, even though it doesn't make sense. Only when energy is captured by matter at a point in time does speed apply. Or in other words, only in the fourth dimension does speed apply. The space between us all may in fact be time.

Decisions

Life is the only matter we are aware of that can harness energy so it can make decisions over time. Is that a coincidence? Energy, matter and time define life. The three together create reality. Indeed, E=MT² would be a game changer for religion. What I love about this theoretical time universe is that it points out exactly why life is very special. The universe is a reality to make decisions in.

"It's not where you are or what you have. It's the decisions you make that count."

HTML5: Zoom and Pan with Transform-Origin

October 21, 2016

A little over two years ago, Chris Coyier from the world's most popular web development blog, CSS-Tricks.com, offered to pay me to write a guest article on how to use transform: scale to zoom an image and transform-origin to pan it. I was honored, but at the time I had two business partners and was focused on a new business. I thought I'd share this technique now, as zoom functionality is important to many websites, and who doesn't want simplicity and hardware acceleration for buttery-smooth animations? Warning: there is some basic JavaScript, CSS and HTML involved.


The code itself might seem long for this format, so instead of going through it line by line, I'll explain how it works so you can code it up on your end. My plan is to get it up on GitHub so it can be polished and incorporated into other libraries.

The HTML is very simple: <div><img src="image.png"></div>. The div has overflow: hidden set, so when you scale the image, it enlarges without expanding beyond its container.
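
A minimal sketch of that structure might look like the following (the class names and dimensions are my own placeholders, not the original code):

    <div class="zoom-viewport">
      <img class="zoom-target" src="image.png" alt="">
    </div>

    <style>
      .zoom-viewport {
        width: 600px;              /* any fixed or responsive size */
        height: 400px;
        overflow: hidden;          /* clip the enlarged image to the container */
      }
      .zoom-target {
        width: 100%;
        height: 100%;
        transform: scale(1);       /* zoom level, updated from JavaScript */
        transform-origin: 50% 50%; /* pan point, updated from JavaScript */
        will-change: transform;    /* hint to the browser for hardware acceleration */
      }
    </style>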

After being zoomed, we need to be able to pan the image left to right and up and down so we can view the area we are interested in. Transform-origin tells the browser what point in the image we scale from. If your origin is set to 50% 50%, you are scaling the image from the center. However, if your origin is set to 100% 100%, then you are scaling your image from the bottom-right corner. With that principle in mind, you can see that by adjusting the origin in response to mouse or touch movements, you can pan or swipe an image in any direction.

So now you just need to figure out how to calculate the origin based on input events. To do this, you take the X and Y changes in the mouse or touch event and then change the transform origin accordingly. You can do a 1:1 mapping, where the image pans at the same rate as the pointer moves, or, as I often prefer, save users time by amplifying their movements to roughly 2:1, e.g. move_x = ( ( distX / ( screen_width * 0.8 ) ) / curScale ) * 200. Clamp the transform origin to the 0–100% range and your image won't go out of bounds.
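
Here is a rough JavaScript sketch of that idea for mouse input (the selector, variable names and the 0.8 / 200 amplification constants follow the formula above but are otherwise my own assumptions; touch events would work the same way with touchstart/touchmove):

    var img = document.querySelector('.zoom-target');
    var curScale = 2;                   // current zoom level
    var originX = 50, originY = 50;     // transform-origin in percent
    var dragging = false, lastX = 0, lastY = 0;

    img.style.transform = 'scale(' + curScale + ')';

    function clamp(value, min, max) {
      return Math.min(max, Math.max(min, value));
    }

    document.addEventListener('mousedown', function (e) {
      dragging = true;
      lastX = e.clientX;
      lastY = e.clientY;
    });

    document.addEventListener('mouseup', function () {
      dragging = false;
    });

    document.addEventListener('mousemove', function (e) {
      if (!dragging) return;

      var distX = e.clientX - lastX;    // change in X since the last event
      var distY = e.clientY - lastY;
      lastX = e.clientX;
      lastY = e.clientY;

      // Amplify the movement (~2:1) and reduce it as the zoom level increases,
      // following the formula in the article.
      var moveX = ((distX / (window.innerWidth * 0.8)) / curScale) * 200;
      var moveY = ((distY / (window.innerHeight * 0.8)) / curScale) * 200;

      // Dragging right should reveal the left side of the image, so subtract,
      // and clamp the origin to the 0-100% range so the image stays in bounds.
      originX = clamp(originX - moveX, 0, 100);
      originY = clamp(originY - moveY, 0, 100);

      img.style.transformOrigin = originX + '% ' + originY + '%';
    });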

One of the beauties of using transform-origin to pan an image is that you don't have to make any extra calculations when zooming: the image will continue to zoom in on the exact point that is in focus.