Train with up to 10x less data

Slash the cost of acquiring and labelling training datasets without sacrificing performance. Build state-of-the-art solutions faster.

Reduce size of models

Use less power on the edge

Increase battery life of mobile devices by deploying smaller models. Capture savings at scale by deploying solutions on lighter chips.

Boost accuracy

Start using in minutes

Nothing to learn. No professional services required. Simply download our Docker image to use UpStride in your existing system, in the cloud or on-premises.

Integrate with current workflows

Strengthen current workflows

No trade-offs. UpStride works with all neural networks and optimization techniques, from synthetic data and transfer learning all the way to quantization and more.

Our API is a game changer for hundreds of use cases

Boost in Performance

Training with less data

More compact models

How it works

We have reshaped the very foundations of deep-learning computing

We built a new and unique datatype, the result of five years of R&D. It encapsulates more information than traditional tensors, vectors and floating-point values, learning more from 2D and 3D images.

Contact us

UpStride layers data type

Meet our API

« 1 » Create a JSON file to describe your training scenario based on your dataset and neural-network architecture.

« 2 » Train in the cloud or on-premises and start using the model in your inference tasks.

« 3 » Combine with traditional model optimization techniques such as transfer learning, data augmentation, quantization, pruning, etc. — the API is fully compatible with all of them.
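To make the quantization step concrete, here is a minimal sketch in plain Python of symmetric post-training quantization to int8 — one of the standard optimization techniques mentioned above. This is an illustration of the general idea, not UpStride's API; the function names are hypothetical.

```python
def quantize_int8(weights):
    """Map float weights to int8 codes with a single symmetric scale.

    The scale is chosen so the largest-magnitude weight maps to 127.
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes."""
    return [c * scale for c in codes]
```

In a real deployment the quantized weights are what ship to the lighter edge chips; frameworks such as TensorFlow Lite implement far more sophisticated per-channel variants of this idea.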

Contact us

"nr_epochs": 30,
"optimizer": "Adam",
"loss": "sparse_categorical_crossentropy",
"learning_rate": 0.001,
"batch_size": 32,
"backend": "upstride",
"dataset": "cifar10”,
"modelcheckpoint_freq": 5,
"layers": [
"Conv2D(32, (3, 3), strides=(1, 1), activation=tf.nn.relu, padding='SAME')",
"Conv2D(32, (3, 3), strides=(1, 1), activation=tf.nn.relu, padding='SAME')",
"MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='SAME')",
"Dense(64, activation=tf.nn.relu)",
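A scenario file like the one above can be loaded and sanity-checked with nothing but the standard library. This is a minimal sketch assuming only the fields shown in the example; `load_scenario` and the required-field list are hypothetical helpers, not part of UpStride's tooling.

```python
import json

# Fields we expect in a training-scenario file, with their types.
# (Hypothetical validation rules based on the example scenario.)
REQUIRED = {
    "nr_epochs": int,
    "optimizer": str,
    "loss": str,
    "learning_rate": float,
    "batch_size": int,
    "backend": str,
    "dataset": str,
    "layers": list,
}

def load_scenario(path):
    """Load a JSON training scenario and verify required fields."""
    with open(path) as f:
        scenario = json.load(f)
    for field, expected_type in REQUIRED.items():
        if field not in scenario:
            raise ValueError(f"missing field: {field}")
        if not isinstance(scenario[field], expected_type):
            raise TypeError(f"{field} should be {expected_type.__name__}")
    return scenario
```

Catching a typo in the scenario file this way is much cheaper than discovering it hours into a training run.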

We focus on two of the main challenges in Deep Learning

Training with less data

Deep learning is incredibly data-hungry. This prevents companies from making the best of their usually noisy, incomplete, and imbalanced datasets, and ultimately hinders the deployment of deep-learning-powered systems in production. Additionally, while proper image annotation is essential to train a neural network, it remains costly, error-prone and time-consuming. The future of intelligent systems should be about less data, not more.

Increasing models' power efficiency for embedded systems

Five years from now, intelligence will be deployed everywhere. Everyday objects, capable of visually understanding the world, will permeate our personal and professional lives. Running deep-learning systems on the edge will require power-efficient algorithms that can perform a large number of calculations on lighter chips. The future of intelligent systems should be about compact and efficient models.

Contact our team

We would love to talk about how we could work together

Contact info:

Interested in:

Boosting accuracy
Training with less data
Reducing model size

More info?

Copyright © 2019 UpStride SAS. All rights reserved.