OrbNet AI Innovation Lab
Introducing the new OrbNet AI Innovation Lab. This "virtual" innovation lab was created as a strategic asset for OrboGraph and our clients.
Led by Avikam Baltsan, Co-President & CTO of OrboGraph, together with data scientists and AI architects with expertise in data intelligence, analytics, scoring, and computer vision/machine learning/deep learning, the innovation lab's primary goal is to formalize a process in which Artificial Neural Network (ANN)-based products reach the market faster while achieving optimal performance levels. The underlying technology incorporated into the OrboAnywhere and OrboAccess lines of business is named OrbNet AI with Deep Learning Models.
There are multiple benefits to formalizing an innovation lab:
- Defines the technology foundation: In this case, focusing on AI and deep learning.
- Reinforces our company charter: To solve difficult challenges in the financial and healthcare payments industries involving AI-based image processing and computer vision.
- Formalizes the development process and aligns with industry leaders: This methodology is one with which large organizations are familiar. See how these 31 companies have implemented an innovation lab concept.
- Demonstrable results: When blended with agile development, outcomes can be demonstrated much faster than with traditional development.
- Provides early access for POC: Existing and prospective business partners, financial institutions and healthcare RCM companies can obtain a pre-release of software functionality or have access to software remotely, for proof of concept (POC) testing of new product ideas.
- By testing new system capabilities during a POC stage, the organization is empowered with verified results for a more informed decision process.
- An early adoption partner may also have influence on the final release content.
The OrbNet AI Innovation Lab is composed of home office and remote employees around the world performing development and product management functions.
A New Development Process
Traditional algorithm-centric development requires intensive coding effort and algorithmic skills. The new AI-based development cycle is based on a data-driven process. Our models learn directly from large sets of labeled data, offering faster and more accurate prediction results.
The various stages of our new development cycle are summarized below:
- Problem definition: Includes the type of input each model receives as well as the type of output each model should predict.
- Comprehensive dataset creation for supervised (and unsupervised) learning.
- Creation of the model training infrastructure.
- Deep learning model creation and development, including layer optimization.
- Model training: once the dataset is processed by the model, the outcome is a trained model ready for its specific task(s).
- Integration of models into line-of-business modules and applications, i.e., OrboAnywhere and OrboAccess.
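The data-driven cycle above can be sketched with a deliberately tiny, self-contained example. This is not OrbNet AI code; it is a minimal illustration using a hypothetical one-neuron classifier trained on a toy labeled dataset, mirroring the stages from dataset creation through to a trained model:

```python
import math
import random

# Stages 1-2: problem definition and labeled dataset creation.
# Toy task (hypothetical): predict label 1 when the sum of two
# features exceeds 1.0 -- a stand-in for a real labeling scheme.
random.seed(0)
dataset = []
for _ in range(200):
    x = (random.random(), random.random())
    y = 1.0 if x[0] + x[1] > 1.0 else 0.0
    dataset.append((x, y))

# Stages 3-4: training infrastructure and model creation.
# A single-neuron logistic model trained with plain gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

for epoch in range(100):
    for x, y in dataset:
        p = predict(x)
        err = p - y          # gradient of the log-loss w.r.t. z
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# Stage 5: the outcome is a trained model ready for its task.
accuracy = sum(
    (predict(x) > 0.5) == (y == 1.0) for x, y in dataset
) / len(dataset)
print(f"training-set accuracy: {accuracy:.2f}")
```

A real deep learning model replaces the single neuron with many layers, but the shape of the cycle is the same: the model learns from labeled data rather than from hand-coded recognition rules.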
Once these stages are completed, the applications are ready for processing new data. In AI terminology, this is called inference:
- "Inference" is defined as the process of deploying a model and processing incoming data (i.e., images or video) to look for and identify whatever it has been trained to recognize.
- Model reinforcement can then be completed with ongoing system defect retrieval obtained from production work. A root cause analysis is completed on defect items.
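A minimal sketch of this inference and reinforcement loop, using a hypothetical stub in place of a deployed deep learning model (the feature values, labels, and confidence threshold are illustrative only):

```python
# Hypothetical stand-in for a deployed, trained model.
def trained_model(image_features):
    """Return (predicted_label, confidence) for one input item."""
    score = sum(image_features) / len(image_features)
    if score > 0.5:
        return ("amount-field", score)
    return ("unknown", 1 - score)

# Inference: stream incoming production items through the model.
incoming = [[0.9, 0.8, 0.7], [0.1, 0.2, 0.1], [0.6, 0.7, 0.9]]
results = [trained_model(item) for item in incoming]

# Model reinforcement: collect unrecognized or low-confidence items
# for root cause analysis and, eventually, retraining.
defects = [item for item, (label, conf) in zip(incoming, results)
           if label == "unknown" or conf < 0.6]
print(results)
print("items flagged for review:", len(defects))
```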
The output of this development cycle has been officially branded as "OrbNet AI Technology."
OrbNet AI Technology
The principles, nomenclature, and underlying technology within OrbNet AI are generally new to banking and healthcare remittance processing. To start, AI with deep learning models is neither based on recognition algorithms nor reliant upon OCR toolkits.
The OrbNet AI Innovation Lab development team is working with a wide range of deep learning technologies delivered as highly optimized models.
Components and platforms incorporated include Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Gradient Boosted Decision Trees, Gen-II text classification models, the TensorFlow RT framework, and CUDA drivers for optimal inference on NVIDIA and other GPUs.
Graphics Processing Units
Unleashing the power of OrbNet AI is contingent upon access to significant processor capacity. Graphics processing units (GPUs) have become the de facto standard for running deep learning models.
In computing, floating point operations per second (FLOPS, flops, or flop/s) is a measure of computer performance, useful in fields of scientific computation that require floating-point calculations. As an example, one of the OrbNet AI models can consume between 1 billion (1,000,000,000) and 10 billion (10,000,000,000) floating-point operations per CAR field.
This may seem like enormous overhead to the layperson, but the NVIDIA Tesla V100 (one of their most powerful GPU families) delivers 14 teraFLOPS (14,000,000,000,000 floating-point operations per second) at single precision.
The result is that these individual calculations by field can be easily scaled with today's hardware.
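The headroom is easy to quantify from the figures above. A quick back-of-the-envelope check, assuming the worst-case 10 billion operations per field:

```python
# Throughput estimate from the figures quoted above.
flops_per_field = 10_000_000_000            # worst case: 10 billion per CAR field
gpu_flops_per_second = 14_000_000_000_000   # NVIDIA Tesla V100, single precision

fields_per_second = gpu_flops_per_second / flops_per_field
print(f"upper-bound fields/second on one V100: {fields_per_second:,.0f}")
```

Even at the worst-case cost per field, a single GPU has capacity for on the order of a thousand fields per second, before any multi-GPU scaling.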
OrbNet AI Process Flow
The internal process for recognizing fields on checks and full-page documents is dramatically different from traditional OCR, ICR, and CAR/LAR recognition techniques. A benefit of this process is that with greater processing capacity, the recognition process can be collapsed into the following stages:
- Field Detection: Referred to as "object detection" in AI nomenclature, this deep learning model will identify the ROI (Region Of Interest), locking onto the coordinates of the field. Multiple fields can be detected simultaneously rather than serially.
- Text Classification: Deep learning models will identify and deliver a classification specific to the fields identified on a document. We use targeted techniques applied to checks, money orders, EOBs, and correspondence letters. OrbNet AI continues to be enhanced with other document types as well.
- Interpretation: The output value(s) and scores represent the probability of success, normalized via several internally deployed decision tree models for optimal performance.
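As a rough sketch of this staged hand-off, the pipeline below uses hypothetical stubs in place of the three deployed models (the function names, field names, coordinates, and confidence values are illustrative, not OrbNet AI internals):

```python
def detect_fields(image):
    """Field detection ("object detection"): return regions of interest
    (ROIs) with coordinates. All fields are returned at once, not serially."""
    return [{"name": "amount", "bbox": (120, 40, 260, 70)},
            {"name": "payee",  "bbox": (30, 40, 110, 70)}]

def classify_text(roi):
    """Text classification: predict a value for one detected field."""
    return {"amount": "1,250.00", "payee": "ACME Corp"}[roi["name"]]

def interpret(field_name, value):
    """Interpretation: attach a probability-of-success score
    (a stub here; real scores come from decision tree models)."""
    return {"field": field_name, "value": value, "confidence": 0.97}

image = "check_front.tiff"  # placeholder input
results = [interpret(roi["name"], classify_text(roi))
           for roi in detect_fields(image)]
print(results)
```

The point of the structure is that each stage consumes the previous stage's output, so greater processing capacity lets detection run over all fields simultaneously before classification and interpretation complete the result.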