July 17, 2020

IoT & AI AT THE EDGE & COMPUTER VISION SUMMIT


In recent years, we have seen the confluence of some big technology trends such as the Internet of Things (IoT), deploying AI on devices rather than in the cloud, the rapid increase in compute power, and the advances in accuracy and speed of computer vision models. This is creating an ecosystem that can deliver powerful business applications that were hitherto unthinkable. At the VB Transform IoT/AI at the Edge & Computer Vision Summit, we discuss topics such as ensuring greater user privacy, lowering latency, improving search and personalization, accelerating automation, and delivering real-time intelligence.

In implementing these AI technologies, organizations also need to focus on big-picture concerns around security, privacy, governance, accuracy, explainability, and eliminating biases related to gender, race, and other attributes.

Hear from industry leaders who will talk about their journeys and learnings in implementing these technologies, how they unlocked value/ROI from them, and their thoughts about what the future holds.

DAY 3 Agenda

Friday

July 17th

9:00 AM – 9:15 AM PT

Main Stage

Welcome from Emcee 

Opening remarks from Dean Takahashi, Lead Writer, GamesBeat

Opening remarks from Maribel Lopez, Founder & Principal Analyst, Lopez Research

9:15 AM – 9:40 AM PT

Main Stage

FIRESIDE CHAT Bringing the Power of the Data Center to IoT & Edge AI

Giant industries such as manufacturing, retail, healthcare, and transportation are harnessing the power of AI to improve their competitiveness, responsiveness, and efficiency. Today, the majority of these AI applications run in the cloud and data center.

 

Trillions of devices and sensors used in these industries continuously generate data that can be mined with AI for greater insight. In some cases it's not practical to send all of that data to the cloud (or core data center) due to high cost, security, and latency constraints.

 

To deliver data center performance at the edge, we need high compute power and infrastructure that supports low latency and security requirements. In this keynote session, NVIDIA's VP & GM of Embedded & Edge Computing, Deepu Talla, shares use cases, insights, and best practices on the right strategy, platform, tools, and ecosystem needed to make this transformation a reality.

Deepu Talla, Vice President and General Manager of Embedded and Edge Computing, NVIDIA 

In conversation with Maribel Lopez, Founder & Principal Analyst, Lopez Research

9:40 AM – 10:05 AM PT

Main Stage

FIRESIDE CHAT From Pin to Purchase: Lessons from Pinterest on leveraging computer vision and e-commerce at scale to create inspirational shopping journeys

At Pinterest, the commerce engineering opportunity was to allow users to go to “purchase” from any Pinterest pin. Pinterest leveraged AI to increase the number of shoppable product pins by 2.5x, while driving a total increase in traffic to retailers by 2.3x. And, the number of pinners engaging with shopping increased 44% year over year.

 

To achieve this goal of making the platform shoppable, Pinterest used various AI technologies including computer vision-powered visual search, machine learning-powered recommendations, and AR. It also partnered with the e-commerce platform Shopify to make Pinterest catalogs easily accessible to Shopify's 1M+ merchants through a custom app. Recent updates also include the ability to shop from a board or search with recommendations based on trends and personal taste, visual search right on Pins to "shop similar" in-stock products, and style guides that allow for browsing across popular styles in home decor.  SVP of Technology Jeremy King will share details on how Pinterest did these things, and the organizational and operational changes that were needed. 

Jeremy King, SVP of Technology, Pinterest

In conversation with Nicole Alexander, Professor of Marketing & Technology, NYU & Former SVP, Chief Innovation Expert, Ipsos

10:05 AM – 10:30 AM PT

Main Stage

FIRESIDE CHAT How Intel's technology is enabling the push towards defect-free factories

Imagine a factory where 99.9% of manufacturing defects are automatically and positively identified before products leave the manufacturing line. Intel has enabled progress towards this vision in multiple factory deployments by focusing on world-class AI running at the edge. Intel has been enabling meaningful gains in manufacturing efficiency across numerous factories globally by making it easier and more cost-effective to deploy AI solutions on manufacturing data streams from the edge to the cloud. In this talk, Intel will walk through that data journey and show how an advanced data analytics pipeline combined with AI algorithms has helped solve some of its customers' toughest problems.

Brian McCarson, VP & Sr. Principal Engineer, Internet of Things Group (IOTG), Intel  

In conversation with Matt Marshall, Founder & CEO, VentureBeat

10:30 AM – 10:45 AM PT

Virtual Networking & Break – Stand up, stretch, grab a coffee or tea, and connect with speakers and attendees virtually

10:30 AM – 11:15 AM PT

(Invite Only)

Executive Forum Roundtable – How manufacturing companies are making products faster, better, and cheaper by leveraging AI technologies such as robotics, computer vision, and IoT at the edge.

10:45 AM – 11:10 AM PT

Main Stage

FIRESIDE CHAT How Uber is leveraging AI on the edge and computer vision to deliver a better product to its users with greater privacy and lower latency

Uber is arguably one of the most experienced end-user companies when it comes to using AI, having built its own AI engine, Michelangelo, and being at the forefront of autonomous vehicle technology. In this fireside chat, Zoubin Ghahramani, Chief Scientist of Uber, will talk about lessons learned in Uber's journey of implementing AI and share his thoughts on how companies can deliver game-changing AI using some of the latest technologies such as IoT, AI at the edge, and computer vision. He will also share some of the concerns companies need to keep in mind, such as ensuring even greater user privacy and delivering lower latency to make the AI seamless with the real world.

Zoubin Ghahramani, Chief Scientist, Uber

In conversation with Maribel Lopez, Founder & Principal Analyst, Lopez Research

11:10 AM – 11:35 AM PT

Main Stage

FIRESIDE CHAT How eBay's in-house AI platform 'Krylov' leverages computer vision and similarity search to give users a more personalized and better shopping experience

eBay's team culture of innovation has enabled it to overhaul and modernize its AI and infrastructure over the past several years with open source technologies such as Kubernetes, Envoy, MongoDB, Docker, and Apache Kafka. It also spurred the company to develop its own in-house AI platform, 'Krylov', which was no easy feat; eBay designed its own specialized servers for the complex challenges that are unique to its massive data processing, hitting 300 billion data queries daily with a data footprint of more than 500 petabytes. To put this into context, eBay's data footprint is equivalent to 1 trillion songs, 2.5 million hours of movies, and enough space to back up the U.S. Library of Congress 300 times over. These milestones wouldn't have been possible without the eBay engineers, scientists, and product managers focusing on the opportunity of the scale and technical challenges eBay's marketplace presents. 'Krylov' is able to process these massive data volumes to offer a computer vision-based similarity search, which facilitates a more personalized and delightful shopping experience for eBay's 174 million active buyers worldwide.

 

Mazen will detail how the team went about building the business use cases, putting the right technologies in place, and fostering an organizational culture of collaboration and transparency to remove roadblocks and inspire the designs and solutions of the future. He will talk about the challenges teams faced while implementing their AI platform and transforming their backend infrastructure. He'll share how eBay has evolved its 25-year-old organizational structure, spotlighting AI and data science at the core of its technology operations, and offer guidance on strategies to build the right data science teams -- and empower them.

Mazen Rawashdeh, SVP & Chief Technology Officer, eBay

In conversation with Kunle Olukotun, Chief Technologist & Co-Founder, SambaNova Systems and Cadence Design Professor of Electrical Engineering and Computer Science, Stanford University

11:35 AM – 11:55 AM PT 

Tech Showcase

1. Trueface, CEO Shaun Moore: How 'Responsible Computer Vision' can help reopen the economy

 

Trueface, a team of artificial intelligence experts holding multiple patents in the field, is on a mission to make the world safer and smarter for all through the responsible use of computer vision. This mission has driven us to develop robust computer vision solutions that make our clients' environments more secure and intelligent using their existing camera data. We strive to deliver technology that performs equally well across all ethnicities and genders, and we publish our analysis of gender and ethnicity bias on a third-party dataset to demonstrate transparency and accountability.

 

We are announcing a suite of post-COVID solutions that satisfy the safety requirements businesses must meet to reopen in the safest way possible. These solutions include touchless elevated temperature checks, PPE compliance (mask verification and social distancing), and identity management through face recognition for contactless access control. We have paired our technology with innovative camera manufacturers to run our solutions entirely on the camera's chip, i.e. at the edge, without requiring access to a backroom server. This technology is already helping industries reopen their doors, and we are excited to share a demonstration with you.

 

Trueface offers cutting-edge solutions: SDKs, dockerized APIs, and a no-code solution that enable privacy and security through localized deployments on client infrastructure. This ensures the data is controlled, processed, and maintained only by the client and accessible to no one else.

2. AnyClip, President & CEO Gil Becker: 'Connection, Conversation, & Conversion: How AnyClip is using deep learning computer vision to help organizations maximize the power of their video assets'

Now more than ever, video is the primary medium for communication between people and organizations. The common denominator for all organizations: they will do anything to attract us, the audience, to watch their videos on their websites and not elsewhere. Why? Because that is how brands increase their audience, their opportunities, and their revenue. They all share the same goal: to bring the potential money makers to their websites. The problem: their videos are unstructured, not searchable, sorted, or personalized. Frequently this maze of videos becomes impossible to navigate, and as a result, brands lose business.

 

Using patented, proprietary AI and ML algorithms, AnyClip generates data from videos, allowing brands to automatically tag, analyze, and curate videos into configurable channels and sub-channels, and to create sorted, discoverable, and searchable video hubs on their websites. Events, training, marketing promotions, product updates, insightful information: all personalized and accessible to various types of viewers, including buyers, distributors, and employees. It is a "Netflix for Business" solution that addresses marketers' top concern: how to attract viewers to their own websites.

The company uses a variety of deep learning models, such as a 152-layer CNN, LGBM, MLPs, and FCN-8s-VCS, each chosen according to its strengths for the given task.
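As a rough illustration of what frame-level tagging with a deep CNN can look like, here is a minimal sketch using torchvision's pretrained 152-layer ResNet. This is not AnyClip's proprietary pipeline; the model choice, frame-sampling strategy, and ImageNet label set are assumptions made purely for demonstration.

```python
# Illustrative sketch only: tagging video frames with a pretrained 152-layer CNN
# (torchvision's ResNet-152). NOT AnyClip's actual pipeline; model, sampling
# rate, and label vocabulary are assumptions for demonstration.
import torch
from torchvision.models import resnet152, ResNet152_Weights

weights = ResNet152_Weights.DEFAULT
model = resnet152(weights=weights).eval()
preprocess = weights.transforms()          # resize, crop, normalize as the model expects
categories = weights.meta["categories"]    # ImageNet class names used as example tags

@torch.no_grad()
def tag_frame(frame_pil, top_k=5):
    """Return the top-k (label, confidence) tags for one video frame (PIL image)."""
    logits = model(preprocess(frame_pil).unsqueeze(0))
    probs = logits.softmax(dim=1).squeeze(0)
    conf, idx = probs.topk(top_k)
    return [(categories[i], float(c)) for i, c in zip(idx, conf)]

# In a real pipeline, frames would be sampled from each video (e.g. one per second),
# tagged, and the aggregated tags indexed to make the library searchable.
```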

 

During this demo, AnyClip President & CEO Gil Becker will ingest content from technology enterprises (Cisco, IBM, VMWare, Microsoft), analyze and automatically curate massive video libraries into predefined channels and subchannels, then create one or multiple video hubs instantly. Insightful data and BI engines will sort channels according to trends, popularity, and recency.

11:55 AM – 12:40 PM PT 

Virtual Networking & Lunch Break – Refuel and connect virtually with industry executives

11:55 AM – 12:40 PM PT

(Invite Only)

Executive Forum Roundtable – Back to the future: How the use of AI is leading advances in the transportation & automotive industry.

Notwithstanding the global lockdown the world has experienced due to Covid-19, the transportation and automotive industry is a critical cog in keeping the world economy moving -- quite literally. The use of AI technologies like computer vision, ML-based predictive analytics and optimization, IoT, and edge computing, among others, is leading advances in the transportation and automotive industry. These include autonomous vehicles and aircraft, enhanced passenger and public safety, optimized traffic management, and predictive maintenance, to name just a few. Join a select group of senior industry executives in this 45-minute roundtable to brainstorm and exchange ideas.

Presented by Xilinx

Moderated by Mark Fitzgerald, Director, Autonomous Vehicle Practice, Strategy Analytics

Chair Speakers:

Ramine Roane, VP of AI and Software, Xilinx

Dr. James Peng, Co-founder & CEO, Pony.ai

Dr. Song Han, Assistant Professor, Inventor of Deep Compression, MIT

Jim Kane, Vice President, Product Line Management, Magna

12:40 PM – 1:25 PM PT

IoT & AI at the Edge / Computer Vision

PANEL The big technology trends in IoT/AI at the Edge & Computer Vision, and what the convergence of various technologies means for business applications

In recent years, we have seen the confluence of some big technology trends such as the Internet of Things (IoT), deploying AI on devices rather than in the cloud, the rapid increase in compute power, and the advances in accuracy and speed of computer vision models. This is creating an ecosystem that can deliver powerful business applications that were hitherto unthinkable. But with these advances comes the need for overall governance and security of these AI models, and for training them to achieve the desired business outcomes. Join a panel of industry stalwarts to discuss these trends and considerations.

 

Nand Mulchandani, Acting Director and CTO, U.S. Department of Defense Joint Artificial Intelligence Center

Stacey Schulman, VP – IoT Group & Chief Innovation Officer – Retail, Hospitality, Banking and Education, Intel

Dr. Josh Sullivan, Head of Modzy

Anthony Robbins, Vice President, North America Public Sector, NVIDIA

Moderated by Shuchi Rana, Senior Director, HeadSpin

1:25 PM – 1:55 PM PT

IoT & AI at the Edge / Computer Vision

PRESENTATION Accelerating software 2.0 for IoT & Edge

Researchers and scientists are pushing the boundaries of AI to create more complex models for image recognition, NLP, and other exciting use cases, but they are limited by current conventional system architectures. AI will be as transformative for the world as Henry Ford’s automobile was a hundred years ago, but to get there we need a new computer architecture that can run ever larger and more complex models, and that puts AI within reach of all organizations, not just the hyperscale companies that dominate the field today. In this session, Kunle will describe some of the exciting ways that researchers are pushing the limits of AI and what they need to go beyond the current constraints and allow AI to achieve its full potential.

 

With current state-of-the-art systems, developers are forced to do complicated cluster programming for multiple racks of systems and to manually program data parallelism and workload orchestration. This requires extreme specialization that puts AI out of reach for many organizations, and still lacks the computational power for today’s massive AI models. This talk will describe the need for a new software and hardware systems architecture that better supports the dataflow models used by today’s machine learning frameworks, and that can eliminate the deficiencies caused by the instruction sets that bottleneck conventional hardware today. Such a system, with a simpler programming model, would allow organizations of all sizes to run big data models with ease and simplicity.

 

Kunle will draw on his experience as founder of Afara Websystems (acquired by Sun Microsystems in 2002), as a Stanford University Electrical Engineering Professor, and as the current Chief Technologist of SambaNova Systems.

Kunle Olukotun, Chief Technologist & Co-Founder, SambaNova Systems, and Cadence Design Professor of Electrical Engineering and Computer Science, Stanford University

1:55 PM – 2:10 PM PT

Virtual Networking & Break – Stand up, stretch, grab a coffee or tea, and connect with speakers and attendees virtually

1:55 PM – 2:40 PM PT

(Invite Only)

Executive Forum Roundtable – How industrial companies are making products faster, better, and cheaper by leveraging AI technologies such as robotics, computer vision, and IoT at the edge.

2:10 PM – 2:35 PM PT

IoT & AI at the Edge / Computer Vision

PRESENTATION How deep learning delivers ultra-high-definition content in animated features using less than half the datacenter footprint, and how you can apply that to your business

Since the advent of convolutional neural networks, AI has shown great promise in solving a variety of image classification and generation problems. Because the final product of an animated film is a sequence of images, CGI is a natural testing ground for such deep learning algorithms. In recent years, deep learning has improved the detail and sharpness of upscaled images over traditional methods. This technology was used to create the UHD version of Disney and Pixar's most recent film, Onward.

 

In this talk, Vaibhav will discuss how his team is bringing deep learned image super-resolution to feature film production. He’ll explain the motivation, the challenges, and lessons he learned in deploying deep learning models in a creative environment. In doing so, he will illuminate the full AI lifecycle, including tech stack selection, team selection, literature review, data collection, AI model training, experimentation, deployment, and addressing artist feedback. He will abstract these learnings to provide insights to the audience on how they can use deep learning in their businesses.  
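To make the general technique concrete, here is a minimal sketch of deep-learned image super-resolution (an ESPCN-style sub-pixel network in PyTorch). It only illustrates the idea of learned upscaling; Pixar's production model, training data, and scale factor are not public, so every detail below is an assumption.

```python
# Minimal sketch of deep-learned super-resolution via sub-pixel convolution.
# Illustrative only; not Pixar's production model.
import torch
import torch.nn as nn

class SuperResNet(nn.Module):
    """Upscale an image by `scale` using learned features + pixel shuffle."""
    def __init__(self, scale: int = 2, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels * scale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a higher-resolution grid
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# Example: upscale one HD frame to UHD (random tensor stands in for a real frame).
model = SuperResNet(scale=2).eval()
with torch.no_grad():
    hd_frame = torch.rand(1, 3, 1080, 1920)   # batch of one HD RGB frame
    uhd_frame = model(hd_frame)               # shape: (1, 3, 2160, 3840)
```

In practice such a model would be trained on pairs of low- and high-resolution renders so the network learns to reconstruct plausible fine detail rather than simply interpolating pixels.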

Vaibhav Vavilala, Technical Director, Pixar Animation Studios

2:35 PM – 3:05 PM PT

IoT & AI at the Edge / Computer Vision

PANEL DISCUSSION PRESENTED BY NVIDIA Digital Transformation Through AI at the Edge

Faster, data-driven decision making has long been understood as the competitive advantage for modern enterprises. The faster the insights, the better. The rate at which we are generating data grows exponentially, with billions of sensors in stores, hospital equipment, smart cameras, and connected manufacturing machinery. To derive insights from this constant flood of data, organizations need production-ready AI along with the compute power, workflow, and tools to enable these modern optimized algorithms.

 

Now, more than ever, organizations need to transform their operations and AI infrastructure to stay ahead. In this panel, you'll hear from several industry leaders how their companies have leveraged cutting-edge AI technologies to bring real-time decision making to the point of action, from retail stores to manufacturing facilities.

Jimmy Nassif, Head of IT Planning Systems, BMW Group

Matt Scott, Co-Founder and CEO, Malong Technologies

Jered Floyd, Office of the CTO, Red Hat

Moderated by Maribel Lopez, Founder & Principal Analyst, Lopez Research

3:05 PM – 3:30 PM PT

IoT & AI at the Edge / Computer Vision

PRESENTATION How synthetic datasets can help train real-world computer vision models

Deep learning models can perform significantly better when trained on very large datasets. Manually labeling large collections of real-world pictures is expensive, and the resulting datasets may not include the outlier scenarios that matter for models expected to work in complex environments. One way to tackle this problem is to use synthetic data, which can be generated by simulating relevant scenarios with a game engine. And this approach can be applied in many industries, not just gaming.

In this talk, Unity’s principal ML engineer will explore recent advances in machine learning and explain the role game engines play in the future of computer vision. He’ll summarize existing tools that can be used to generate perfectly labeled synthetic datasets for different perception tasks to reduce the cost of acquiring the labeled examples needed to train the model. It is still early days, but several research projects published in top conferences show how, in some cases, a model trained on real-world data is outperformed by a model trained on 100% synthetic data at a fraction of the cost. Cesar will provide takeaways to executives in other industries about how they can leverage these findings.
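As a conceptual sketch of the workflow described above, the snippet below mixes a perfectly labeled synthetic dataset with a smaller hand-labeled real-world dataset for training. The directory paths are hypothetical, and the synthetic images are assumed to have been rendered and auto-labeled ahead of time (for example with a game-engine tool such as Unity's Perception package); this is not Unity's own code.

```python
# Conceptual sketch: combining auto-labeled synthetic frames with a smaller
# hand-labeled real-world set for supervised training. Paths are hypothetical.
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

# Labels for synthetic frames come "for free" from the simulator; real frames are
# hand-labeled. ImageFolder expects class-named subdirectories in both cases.
synthetic = datasets.ImageFolder("data/synthetic_frames", transform=tf)  # hypothetical path
real = datasets.ImageFolder("data/real_frames", transform=tf)            # hypothetical path

train_loader = DataLoader(ConcatDataset([synthetic, real]), batch_size=32, shuffle=True)

for images, labels in train_loader:
    # ... standard supervised training step on the mixed batch ...
    break
```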

Cesar Romeo, Principal ML Engineer, Unity Technologies

3:30 PM – 3:55 PM PT

IoT & AI at the Edge / Computer Vision

PRESENTATION How OHSU leveraged computer vision and IoT to help in the battle against Covid-19

Oregon Health & Science University (OHSU) used computer vision technology to dramatically increase its ability to manage risk, preserve personal protective equipment (PPE), and remotely monitor patients using state-of-the-art virtual technology in intensive care.

 

It did so by bringing together near real-time data from ventilators, patient monitoring systems, electronic medical records, labs, and other systems to allow one clinician to monitor several patients at once.  

 

In this session, OHSU Chief Medical Officer Dr. Matthias Merkel and GE Healthcare Chief Digital Officer Amit Phadnis will discuss how OHSU leveraged virtual ICU technology to overcome the challenges of managing a large number of COVID-19 patients – across several locations – and help them receive the best care possible, as well as ensure clinicians can work safely and efficiently during the pandemic.

COVID-19 has placed hospitals and health systems like OHSU under extreme pressure, as operating protocols, systems, and technologies need to be updated quickly to treat patients infected with the virus. The hyper-contagious nature of COVID-19 has also placed increased emphasis on the safety of the clinicians and healthcare workers treating patients. To address these concerns, the adoption of advanced and emerging computer vision technologies, such as virtual ICU solutions, is accelerating to help hospitals cope with growing admissions.

Dr. Matthias Merkel, Chief Medical Officer, OHSU

Amit Phadnis, Chief Digital Officer, GE Healthcare

3:55 PM – 4:15 PM PT

Main Stage

Women in AI Awards

4:15 PM – 4:25 PM PT

Main Stage

Closing Remarks

Matt Marshall, CEO, VentureBeat

Transform 2020

When? July 14-17, 2020

Hosted Online
Questions about Transform 2020? Contact us at events@venturebeat.com


#vbtransform

© 2020 VentureBeat Transform, the AI event of the year.