Boost accuracy

Improve accuracy without any change in code, neural-network architecture, or datasets used for training.

Train with up to 10x less data

Learn more from your existing datasets or slash the cost of acquiring and labelling extra data without sacrificing performance.

Reduce size of models

Increase battery life of mobile devices by deploying smaller models. Add intelligence to your embedded solutions.

Integrate with current workflows

Start using it immediately. Our API is seamlessly integrated with TensorFlow.

Our API is a game changer for hundreds of use cases

Automotive

Healthcare

Manufacturing

Retail

Defense

Results

+30%

Performance boost

-80%

Training data needed

-50%

Model size

How it works

We have reshaped the very foundations of deep-learning computing

We built a new, unique datatype, the result of five years of R&D. It encapsulates more information than traditional tensors, vectors, and floating-point values, allowing networks to learn more from 2D and 3D images.

Contact us

UpStride layers data type

Meet our API

« 1 » Create a JSON file to describe your training scenario based on your dataset and neural-network architecture.

« 2 » Train in the cloud or on-premises and start using the model in your inference tasks.

« 3 » Keep your existing pipeline: the API is fully compatible with traditional model-optimization techniques such as transfer learning, data augmentation, quantization, and pruning.

Contact us

{
  "nr_epochs": 30,
  "optimizer": "Adam",
  "loss": "sparse_categorical_crossentropy",
  "learning_rate": 0.001,
  "batch_size": 32,
  "backend": "upstride",
  "dataset": "cifar10",
  "modelcheckpoint_freq": 5,
  "layers": [
    "Conv2D(32, (3, 3), strides=(1, 1), activation=tf.nn.relu, padding='SAME')",
    "Conv2D(32, (3, 3), strides=(1, 1), activation=tf.nn.relu, padding='SAME')",
    "MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='SAME')",
    "Dropout(0.2)",
    "Dense(64, activation=tf.nn.relu)",
    "Dropout(0.3)",
    "Dense(10)",
    "Activation(activation=tf.nn.softmax)"
  ]
}
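Because the training scenario is ordinary JSON, it can be checked with standard tooling before a run is launched. The sketch below is a hypothetical helper, not part of the UpStride API; the required key names are simply taken from the example above.

```python
import json

# Hypothetical check, not part of the UpStride API: the key names below
# mirror the example scenario shown above.
REQUIRED_KEYS = {"nr_epochs", "optimizer", "loss", "learning_rate",
                 "batch_size", "backend", "dataset", "layers"}

def load_scenario(text):
    """Parse a JSON training scenario and validate its required keys."""
    scenario = json.loads(text)
    missing = REQUIRED_KEYS - scenario.keys()
    if missing:
        raise ValueError(f"scenario is missing keys: {sorted(missing)}")
    if not isinstance(scenario["layers"], list) or not scenario["layers"]:
        raise ValueError("'layers' must be a non-empty list of layer strings")
    return scenario

example = """
{
  "nr_epochs": 30,
  "optimizer": "Adam",
  "loss": "sparse_categorical_crossentropy",
  "learning_rate": 0.001,
  "batch_size": 32,
  "backend": "upstride",
  "dataset": "cifar10",
  "layers": ["Conv2D(32, (3, 3))", "Dense(10)"]
}
"""

scenario = load_scenario(example)
print(scenario["backend"])  # prints "upstride"
```

Catching a malformed or incomplete scenario at parse time is cheaper than discovering it after a cloud training job has started.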



We focus on two of the main challenges in deep learning

Training with less data

Deep learning is incredibly data-hungry. This prevents companies from making the most of their usually noisy, incomplete, and imbalanced datasets, and ultimately hinders the deployment of deep-learning-powered systems in production. Additionally, while proper image annotation is essential to training a neural network, it remains costly, error-prone, and time-consuming. The future of intelligent systems should be about less data, not more.

Increasing the power efficiency of models for embedded systems

Five years from now, intelligence will be deployed everywhere. Everyday objects capable of visually understanding the world will permeate our personal and professional lives. Running deep-learning systems on the edge will require power-efficient algorithms that can perform large amounts of computation on lighter chips. The future of intelligent systems should be about compact, efficient models.

Contact our team

We would love to talk about how we could work together

Contact info:

Interested in:

Boosting accuracy
Training with less data
Reducing model size

More info? hello@upstride.io

Copyright © 2019 UpStride SAS. All rights reserved.