Google AI Researchers Propose TRASS Approach to Create Truly Intelligent Systems

VentureBeat | May 27, 2020

  • A preprint paper published by Stanford University and Google researchers proposes an AI technique that predicts how goals were achieved, effectively learning to reverse-engineer tasks.

  • They say it enables autonomous agents to learn through self-supervision, which some experts believe is a critical step toward truly intelligent systems.

  • A home robot could leverage it to learn tasks like turning on a computer, turning a knob, or opening a drawer, or chores like setting a dining table, making a bed, and cleaning a room.


Learning general policies for complex tasks often requires dealing with unfamiliar objects and scenes, and many methods rely on forms of supervision like expert demonstrations. But these come with significant overhead; demonstrations, for example, must be performed by experts many times over and recorded with special infrastructure.


That’s unlike the researchers’ proposed approach — time reversal as self-supervision (TRASS) — which predicts “reversed trajectories” to create sources of supervision that lead to a goal or goals. A home robot could leverage it to learn tasks like turning on a computer, turning a knob, or opening a drawer, or chores like setting a dining table, making a bed, and cleaning a room.
 

“Most manipulation tasks that one would want to solve require some understanding of objects and how they interact. However, understanding object relationships in a task-specific context is non-trivial,” explain the coauthors.

“Consider the task [making a bed]. Starting from a made bed, random perturbations to the bed can crumple the blanket, which when reversed provides supervision on how to flatten and spread the blanket. Similarly, randomly perturbing objects in a clean [or] organized room will distribute the objects around the room. These trajectories reversed will show objects being placed back to their correct positions, strong supervision for room cleaning.”




TRASS works by collecting data given a set of goal states, applying random forces to disrupt the scene, and carefully recording each of the subsequent states. A TRASS-driven agent explores outwardly using no expert knowledge, collecting a trajectory that, when reversed, can teach the agent to return to the goal states. In this way, TRASS trains a model to predict the trajectories in reverse, so that the trained model can take the current state as input and provide supervision toward the goal in the form of a guiding trajectory of frames (but not actions).
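
The paper's code isn't reproduced here, but the data-collection and training loop described above can be sketched in a few lines. The scene interface, the observation format, and the linear least-squares model below are illustrative assumptions, not the authors' implementation (which learns a vision model over image frames):

```python
import numpy as np

# Hypothetical scene interface: reset_to_goal() places the scene in a goal
# configuration; perturb() applies a random force and returns the new observation.
class SceneEnv:
    def reset_to_goal(self):
        return np.zeros(8)  # placeholder observation (e.g., pose or image features)

    def perturb(self, obs):
        return obs + 0.1 * np.random.randn(*obs.shape)  # random disturbance

def collect_reversed_trajectories(env, num_episodes=100, horizon=20):
    """Perturb outward from goal states, record every state, then reverse.

    Each reversed trajectory runs from a disturbed scene back to the goal,
    which is the supervision TRASS trains on (frames, not actions).
    """
    dataset = []
    for _ in range(num_episodes):
        obs = env.reset_to_goal()
        forward = [obs]
        for _ in range(horizon):
            obs = env.perturb(obs)
            forward.append(obs)
        dataset.append(forward[::-1])  # disturbed state -> ... -> goal state
    return dataset

def train_reverse_model(dataset):
    """Fit a model mapping the current frame to the next frame along the
    reversed trajectory, i.e. one step closer to the goal (a least-squares
    stand-in for the learned model in the paper)."""
    X = np.concatenate([np.stack(traj[:-1]) for traj in dataset])
    Y = np.concatenate([np.stack(traj[1:]) for traj in dataset])
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

env = SceneEnv()
reverse_model = train_reverse_model(collect_reversed_trajectories(env))
```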
 

At test time, a TRASS-driven agent’s objective is to reach a state in a scene that satisfies certain specified goal conditions. At every step the trajectory is recomputed to produce a high-level guiding trajectory, which decouples high-level planning from low-level control and can therefore be used as indirect supervision to produce a policy via model-based and model-free techniques.
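
At a sketch level, that test-time loop might look like the following. It reuses the hypothetical SceneEnv and reverse model from the earlier snippet and assumes a pluggable low-level controller and an env.step() method, none of which come from the paper:

```python
def guiding_trajectory(reverse_model, obs, steps=10):
    """Roll the reverse-prediction model forward to produce a frame-by-frame
    guide from the current observation toward a goal state (frames only,
    no actions)."""
    frames = [obs]
    for _ in range(steps):
        frames.append(frames[-1] @ reverse_model)
    return frames

def run_agent(env, reverse_model, low_level_controller, goal_test, max_steps=50):
    """Recompute the guiding trajectory at every step and hand the next
    predicted frame to a low-level controller (visual MPC in the paper's
    experiments) as a subgoal."""
    obs = env.perturb(env.reset_to_goal())  # start from a disturbed scene
    for _ in range(max_steps):
        if goal_test(obs):
            break
        guide = guiding_trajectory(reverse_model, obs)
        subgoal_frame = guide[1]                          # next frame toward the goal
        action = low_level_controller(obs, subgoal_frame)  # assumed interface
        obs = env.step(action)                             # assumed interface
    return obs
```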


In experiments, the researchers applied TRASS to the problem of configuring physical Tetris-like blocks. With a real-world robot — the Kuka IIWA — and a TRASS vision model trained in simulation and then transferred to the robot, they found that TRASS successfully paired blocks it had seen during training 75% of the time and blocks it hadn’t seen 50% of the time over the course of 20 trials each.
 

TRASS has its limitations: it can’t be applied, for example, in cases where object deformations are irreversible (think cracking an egg, mixing two ingredients, or welding two parts together). But the researchers believe it can be extended by using exploration methods driven by state novelty, among other things.
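
The researchers don't spell out a particular novelty mechanism; one simple stand-in is a count-based novelty bonus that steers perturbations toward rarely visited states. A minimal sketch, with the discretization and bin size as illustrative assumptions:

```python
from collections import defaultdict

import numpy as np

visit_counts = defaultdict(int)

def novelty_bonus(obs, bin_size=0.5):
    """Count-based novelty: discretize the observation and give a larger bonus
    to states that have rarely been visited before."""
    key = tuple(np.floor(np.asarray(obs) / bin_size).astype(int))
    visit_counts[key] += 1
    return 1.0 / np.sqrt(visit_counts[key])

# During outward exploration, score candidate perturbations by the bonus of the
# state they lead to and prefer the most novel one over a purely random choice.
print(novelty_bonus(np.zeros(8)))  # 1.0 on the first visit, smaller afterward
```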
 

“Our method … is able to predict unknown goal states and the trajectory to reach them,” they write. “This method used with visual model predictive control is capable of assembling Tetris-style blocks with a physical robot using only visual inputs, while using no demonstrations or explicit supervision.”

 

Spotlight

Nestled in the woods of upstate New York along the Hudson River, TERA is a sustainable home for this planet, available on a nightly basis for anyone wanting to experience what the future of sustainable life will be like on Earth and beyond. Its NASA-tested design and materials can be composted back to Earth at the end of its life.


Other News
SOFTWARE

Apiiro Launches Partner Program to Help Customers Fix Cloud-Native Application Risks Faster

Apiiro | June 03, 2022

Apiiro, the leader in Cloud-Native Application Security, today announced the Apiiro Partner Program, which provides comprehensive support for technology, consulting, and reseller partners across the Cloud-Native Application Protection Platform (CNAPP) ecosystem.

In the era of cloud-native application development, the remediation lifecycle is getting longer and more complex because risks are distributed across the design, code, open source packages, infra-as-code, containers, Git and CI/CD servers, and cloud infrastructure. In addition, the shift in responsibilities and the use of a multitude of tools, each addressing only a small subset of cloud-native application risks, has reduced overall efficacy, producing noisy alerts and false positives due to the lack of context. Context from cloud infrastructure to code and Software Bill of Materials (SBOM) visibility are instrumental for the remediation process across the software supply chain.

Partners like Alacrinet, Defy Security, Google Cloud, HashiCorp, NetSPI, NXGN, Parabellyx, and Trace3 from the cloud-native application security, DevOps, cloud infrastructure security, and other cybersecurity industries are joining the Apiiro Partner Program to work together to help customers remediate cloud-native application risks across the software supply chain. Partners will benefit from the Apiiro Risk Graph technology and enabling resources to speed customer adoption and success with a contextual shift-left risk remediation technology.

"Our customers aren't just modernizing their cloud-native application security - they're reinventing the way they develop, build, and deploy cloud-native applications across the software supply chain. By uniting in the Apiiro Partner Program, Apiiro and our partners can collectively ensure cloud-native applications are developed and delivered in a secure manner."

John Leon, VP of Business Development at Apiiro

Program benefits include training materials and sales resources, plus access to technical evaluation demos and documentation to enable go-to-market and joint promotion opportunities. By enrolling in the Apiiro Partner Program, partners can increase their value to customers by delivering contextual shift-left risk remediation before releasing to the cloud.

About Apiiro
Apiiro helps security and development teams proactively fix risk across the software supply chain, before releasing to the cloud.


FUTURE TECH

Intel Open-Sources SYCLomatic Migration Tool

Intel | May 21, 2022

Intel has released an open source tool to migrate code to SYCL[1] through a project called SYCLomatic, which helps developers more easily port CUDA code to SYCL and C++ to accelerate cross-architecture programming for heterogeneous architectures. This open source project enables community collaboration to advance adoption of the SYCL standard, a key step in freeing developers from a single-vendor proprietary ecosystem.

“Migrating to C++ with SYCL gives code stronger ISO C++ alignment, multivendor support to relieve vendor lock-in and support for multiarchitecture to provide flexibility in harnessing the full power of new hardware innovations. SYCLomatic offers a valuable tool to automate much of the work, allowing developers to focus more on custom tuning than porting,” said James Reinders, Intel oneAPI evangelist.

Why It Matters
While hardware innovation has led to a diverse heterogeneous architectural landscape for computing, software development has become increasingly complex, making it difficult to take full advantage of CPUs and accelerators. Today’s developers and their teams are strapped for time, money and resources to accommodate the rewriting and testing of code to boost application performance for these different architectures. Developers are looking for open alternatives that improve time-to-value, and Intel is providing an easier, shorter pathway to enabling hardware choice.

What is SYCL and Project SYCLomatic
SYCL, a C++-based Khronos Group standard, extends C++ capabilities to support multiarchitecture and disjoint memory configurations. To initiate this project, Intel open-sourced the technology behind its DPC++ Compatibility Tool to further advance the migration capabilities for producing more SYCL-based applications. Reusing code across architectures simplifies development, reducing time and costs for ongoing code maintenance. Utilizing the Apache 2.0 license with LLVM exception, the SYCLomatic project hosted on GitHub offers a community for developers to contribute and provide feedback to further open heterogeneous development across CPUs, GPUs and FPGAs.

How the SYCLomatic Tool Works
SYCLomatic assists developers in porting CUDA code to SYCL, typically migrating 90-95% of CUDA code automatically to SYCL code[2]. To finish the process, developers complete the rest of the coding manually and then custom-tune to the desired level of performance for the architecture.

How Code Migration Usage Works
Research organizations and Intel customers have successfully used the Intel DPC++ Compatibility Tool, which has the same technologies as SYCLomatic, to migrate CUDA code to SYCL (or Data Parallel C++, oneAPI’s implementation of SYCL) on multiple vendors’ architectures. Examples include the University of Stockholm with GROMACS 2022[3], Zuse Institute Berlin (ZIB) with easyWave, Samsung Medison and Bittware (view oneAPI DevSummit content for more examples). Multiple customers are also testing code on current and upcoming Intel Xe architecture-based GPUs, including the Argonne National Laboratory Aurora supercomputer, Leibniz Supercomputing Centre (LRZ), GE Healthcare and others.

Where to Get SYCLomatic
SYCLomatic is a GitHub project. The GitHub portal includes a “contributing.md” guide describing the steps for technical contributions to the project to ensure maximum ease. Developers are encouraged to use the tool and provide feedback and contributions to advance the tool’s evolution.

“CRK-HACC is an N-body cosmological simulation code actively under development. To prepare for Aurora, the Intel DPC++ Compatibility Tool allowed us to quickly migrate over 20 kernels to SYCL. Since the current version of the code migration tool does not support migration to functors, we wrote a simple clang tool to refactor the resulting SYCL source code to meet our needs. With the open source SYCLomatic project, we plan to integrate our previous work for a more robust solution and contribute to making functors part of the available migration options,” said Steve (Esteban) Rangel of HACC (Hardware/Hybrid Accelerated Cosmology Code), Cosmological Physics & Advanced Computing (anl.gov).

Resources for Developers
  • SYCLomatic project on GitHub | Contributing.md guide
  • Get started developing: Book: Mastering Programming of Heterogeneous Systems using C++ & SYCL | Essentials of SYCL training
  • CodeProject: Using oneAPI to convert CUDA code to SYCL
  • Intel DevCloud: A free environment to access Intel oneAPI Tools and develop and test code across a variety of Intel® architectures (CPU, GPU, FPGA).

Notes
[1] SYCL is a trademark of the Khronos Group Inc.
[2] Intel estimates as of September 2021. Based on measurements on a set of 70 HPC benchmarks and samples, with examples like Rodinia, SHOC, PENNANT. Results may vary.
[3] The GROMACS development team ported its CUDA code to Data Parallel C++ (DPC++), which is a SYCL implementation for oneAPI, in order to create new cross-architecture-ready code. See also Experiences adding SYCL support to GROMACS, and GROMACS 2022 Advances Open Source Drug Discovery with oneAPI.

About Intel
Intel is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better.


FUTURE TECH

Union.ai releases UnionML for seamless creation of web-native machine learning applications

Union.ai | June 10, 2022

Union.ai, provider of the open-source workflow orchestration platform Flyte and its hosted version, Union Cloud, today announced the release of UnionML at MLOps World 2022. The open-source MLOps framework for building web-native machine learning applications offers a unified interface for bundling Python functions into machine learning (ML) microservices. It is the only library that seamlessly manages both data science workflows and production lifecycle tasks. This makes it easy to build new AI applications from scratch, or make existing Python code run faster at scale.

UnionML aims to unify the ever-evolving ecosystem of machine learning and data tools into a single interface for expressing microservices as Python functions. Data scientists can create UnionML applications by defining a few core methods that are automatically bundled into ML microservices, starting with model training and offline/online prediction.

"Creating machine learning applications should be easy, frictionless and simple, but today it really isn't. The cost and complexity of choosing tools, deciding how to combine them into a coherent ML stack, and maintaining them in production requires a whole team of people who often leverage different programming languages and follow disparate practices. UnionML significantly simplifies creating and deploying machine learning applications."

Union.ai CEO Ketan Umare

UnionML apps comprise two objects: Dataset and Model. Together, they expose function decorator entry points that serve as building blocks for a machine learning application (see the sketch at the end of this item). By focusing on the core building blocks instead of the way they fit together, data scientists can reduce their cognitive load for iterating on models and deploying them to production. UnionML uses Flyte to execute training and prediction workflows locally or on production-grade Kubernetes clusters, relieving MLOps engineers of the overhead of provisioning compute resources for their stakeholders. Models and ML applications can be served via FastAPI or AWS Lambda. More options will be available in the future.

About Union.ai
Union.ai helps organizations deliver reliable, reproducible and cost-effective machine learning and data orchestration built around open-source Flyte. Flyte is a one-of-a-kind workflow automation platform that simplifies the journey of data scientists and machine learning engineers from ideation to production. Some of the top companies, including Lyft, Spotify, GoJek and more, rely on Flyte to power their Data & ML products. Based in Bellevue, Wash., Union.ai was started by founding engineers of Flyte and is the leading contributor to Flyte.
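
The announcement itself contains no code, so as a rough illustration of the Dataset/Model decorator pattern it describes, here is a minimal sketch modeled on UnionML's documented quickstart from around this release. The digits dataset, hyperparameters, and endpoint wiring are illustrative assumptions, and exact names or signatures may differ across UnionML versions:

```python
from typing import List

import pandas as pd
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

from unionml import Dataset, Model

# The two core objects the release describes: a Dataset and a Model.
dataset = Dataset(name="digits_dataset", test_size=0.2, shuffle=True, targets=["target"])
model = Model(name="digits_classifier", init=LogisticRegression, dataset=dataset)

@dataset.reader
def reader() -> pd.DataFrame:
    # Any function returning the training data; the digits set is a stand-in.
    return load_digits(as_frame=True).frame

@model.trainer
def trainer(estimator: LogisticRegression, features: pd.DataFrame, target: pd.DataFrame) -> LogisticRegression:
    return estimator.fit(features, target.squeeze())

@model.predictor
def predictor(estimator: LogisticRegression, features: pd.DataFrame) -> List[float]:
    return [float(x) for x in estimator.predict(features)]

@model.evaluator
def evaluator(estimator: LogisticRegression, features: pd.DataFrame, target: pd.DataFrame) -> float:
    return float(estimator.score(features, target.squeeze()))

if __name__ == "__main__":
    # Train locally (UnionML can also run this on a Flyte/Kubernetes backend).
    trained_model, metrics = model.train(hyperparameters={"C": 1.0, "max_iter": 10000})
    print(metrics)

    # Expose the same functions as a web-native microservice via FastAPI.
    from fastapi import FastAPI
    app = FastAPI()
    model.serve(app)
```

In this pattern the same handful of Python functions back both local experimentation and the deployed prediction service, which is the reduction in cognitive load the release is pointing at.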


AI TECH

Deci Joins NVIDIA Metropolis to Accelerate Unparalleled AI Inference Performance

Deci | August 08, 2022

Deci, a deep learning company harnessing AI to build AI, today announced it has joined NVIDIA Metropolis — a partner program, application framework, and set of developer tools that bring to market a new generation of applications and solutions to make the world’s most important spaces and operations safer and more efficient with advancements in AI vision.

Deci enables AI developers to build, optimize and deploy best-in-class deep learning models, delivering high accuracy tailored for any dataset, inference hardware, speed, and size requirements. Its platform enables unparalleled inference performance on the NVIDIA Jetson edge AI platform — which includes the Jetson Orin, AGX Xavier, Xavier NX and Nano modules — as well as server-based NVIDIA GPUs. With Deci, vendors can deploy complex models onto smaller edge devices, thus achieving real-time latency and maximizing hardware utilization.

NVIDIA Metropolis makes it easier and more cost effective for enterprises, governments, and integration partners to use world-class AI-enabled solutions to improve critical operational efficiency and safety problems. The NVIDIA Metropolis ecosystem contains a large and growing breadth of members who are investing in the most advanced AI techniques and most efficient deployment platforms, while using an enterprise-class approach to their solutions. Members have the opportunity to gain early access to NVIDIA platform updates to further enhance and accelerate their AI application development efforts. Further, the program offers the opportunity for members to collaborate with industry-leading experts and other AI-driven organizations.

“We are honored to be part of NVIDIA Metropolis and confident that our participation will enable us to reach more customers to support them in successfully deploying world-changing AI solutions. AI teams can rely on Deci’s platform to build and optimize top-notch models at the edge, a real game changer for enterprises seeking to innovate.”

Yonatan Geifman, CEO and co-founder of Deci

About Deci
Deci is a deep learning development platform, powered by a proprietary Neural Architecture Search technology. AI developers use Deci to build, optimize and deploy best-in-class deep learning models that are tailored for any task, dataset, inference hardware, and performance targets. Leading AI teams use Deci to accelerate inference performance, enable new use cases on edge devices, shorten development cycles and reduce inference computing costs. Founded by Yonatan Geifman, PhD, Jonathan Elial, and Professor Ran El-Yaniv, Deci's team of deep learning engineers and scientists is dedicated to eliminating production-related bottlenecks across the AI lifecycle.



Resources