
Welcome to Scailable.

1. You simply fit your ML and AI models using your favorite tools.
2. We transpile your fitted model to WebAssembly automatically.
3. And now you can enjoy extremely low latency inferences.
Want to join us in private beta? Sign up here.
What is Scailable?

With Scailable we provide automatic, single-line-of-code conversion of AI and ML models to WebAssembly: "a safe, portable, low-level code format designed for efficient execution and compact representation."

Subsequently, we allow you to generate extremely fast inferences on our cloud, on yours, or even on the edge. We provide extremely low latency evaluations of most common machine learning models without vendor lock-in.

Want to take us for a spin? Check out our demos:

  1. See how we can use WebAssembly to generate complex inferences (using Bayesian Additive Regression Trees) both in the cloud and in the browser: https://www.scailable.net/demo/avm/.
  2. See how fast inferences can be: https://www.scailable.net/demo/bench/. And yes, that's microseconds.

Cutting-edge technology does not exist in a vacuum. We rely on many open source projects to make our stack work, and we collaborate with academic institutions and research labs. We benefit from collaborations with the Jheronimus Academy of Data Science (itself a collaboration between Tilburg University and the Technical University of Eindhoven) and the nth-iteration lab, a research lab developing machine learning and AI methods for personalization.

Fast, and easy.

Yeah, transpiling sounds technical, and if you ever allow yourself the time to play with WebAssembly you will quickly end up in "compiler hell". But we fixed all of that for you. We provide simple Python and R packages to transpile your favorite ML or AI model to WebAssembly with only one line of code:

Single-line deployment of models
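
For instance, deploying a fitted scikit-learn model could look something like the sketch below. Note that the package name (sclblpy), the upload call, and its signature are assumptions made for illustration only, not a definitive reference of our API:

        # Illustrative sketch only: the package name (sclblpy), the upload() call,
        # and its signature are assumptions for this example.
        from sklearn import datasets
        from sklearn.svm import SVC
        import sclblpy as sp

        # 1. Fit a model using your favorite tools.
        X, y = datasets.load_iris(return_X_y=True)
        model = SVC().fit(X, y)

        # 2. Transpile and deploy in one line: the fitted model is converted to
        #    WebAssembly and hosted as a REST endpoint on the Scailable cloud.
        sp.upload(model, X[0, :], docs={"name": "Iris SVC example"})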

To make things even easier, we will host the resulting optimized model for you on our cloud if you wish. That means that seconds after fitting your model you can consume it as a simple REST endpoint at extremely high speeds, for a fraction of the cost of hosting an alternative yourself.

Accessible REST endpoints

After transpiling, your Scailable task is available to you at any time, at any scale, by simply POST-ing the details of your task to

https://sclbl.net/run/:cfid
where :cfid specifies the identifier of your model.

You are fully in control of the REST endpoints that enable your inferences. Besides running in our own cloud, we provide small and efficient containers to run your inference task in your data centre, in a web browser, or on an IoT device. We make it possible to use AI at scale anywhere, anytime.

Consuming your endpoint is easy:

        curl --location --request POST 'https://sclbl.net/run/:cfid' \
        --header 'Content-Type: application/json' \
        --data-raw '{"input": [[1.02142857, 1.11764706, 0.8, 0.51351351]]}'
        

Try it here: https://www.scailable.net/demo/avm/.
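
The same request can be sent from any language. Below is a minimal sketch in Python using the requests library; replace :cfid with the identifier of your own task:

        import requests

        # Minimal sketch: POST a feature vector to a Scailable REST endpoint.
        # Replace :cfid with the identifier of your own transpiled model.
        url = "https://sclbl.net/run/:cfid"
        payload = {"input": [[1.02142857, 1.11764706, 0.8, 0.51351351]]}

        response = requests.post(url, json=payload)
        print(response.text)  # the raw inference result returned by the endpoint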

Contact us

If you want us to help you generate extremely low latency inferences, let us know! Please fill out the form below, or give us a call at (+31) 0619196802.

Created in 2019 with ♥. For questions, contact us at go [at] scailable [dot] net.
For the latest stories, check out our blog.