Embedding Intelligence in Every Device, Everywhere

Make your winning AI model lightweight, hassle-free.

Accelerated by

A Solution Built by a World-Class Team
  • Up to 95% smaller model size
  • Up to 25× faster inference speed
  • Up to 80% savings on inference cost

Problem

Fed up with malfunctioning AI compression tools?

  • Poor compression results
  • Model compilation errors with no clear feedback
  • Buggy frameworks with poor or nonexistent support

We've tried them all. Nothing worked, so we built our own, from scratch.

Solution

Experience a toolkit that just works.

Import your PyTorch model and let our engine do the hard work.
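The page doesn't document the SDK's actual API, so as an illustration only, here is a minimal, standard-library sketch of the kind of post-training weight quantization a compression engine applies to shrink model size. All names here are hypothetical and are not CLIKA's interface:

```python
import struct

def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto [-127, 127] with one scale.
    (Hypothetical helper for illustration; not part of any real SDK.)"""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.0, 1.0]          # toy float32 weight tensor
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each float32 weight takes 4 bytes; each int8 weight takes 1 byte,
# so quantization alone yields roughly a 4x storage reduction.
fp32_bytes = len(weights) * struct.calcsize("f")
int8_bytes = len(q) * 1
```

Quantization by itself gives about a 4× size reduction; reductions like the "up to 95%" figure above would additionally rely on techniques such as pruning or architecture changes, which this sketch does not attempt to show.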

Compression Engine

Streamline the compression process and focus on your SOTA AI model.

Coming Soon

Graph Editor

A Graph Editor to help you pre- and post-process any AI model.

Coverage

We've got you covered.

We are constantly adding new layers and operations to keep you up-to-date.

Model Support

Vision-CNN · Vision-ViT · Audio · Language · Multi-Modal

Framework Support

TensorRT · ONNX Runtime · TFLite · OpenVINO · CoreML

Legend: Full Support · Partial Support · Coming Soon

*Check the full list of supported layers/operations here.

Do you need support for a special layer? Let us know what you need.

Performance

What you see is what you get

We don't cherry-pick results for marketing's sake. These are the results with our SDK's default settings, without any fine-tuning.

Post Compression Accuracy Retention Comparison

Model types: Object Detection (YoloV7, RetinaFace-ResNet50) · Classification (ResNet18, MobilenetV3 Large) · Super Resolution (IMDN) · Transformer (ViT B16)

| Model           | YoloV7  | RetinaFace-ResNet50 | ResNet18 | MobilenetV3 Large | IMDN  | ViT B16 |
|-----------------|---------|---------------------|----------|-------------------|-------|---------|
| ORIGINAL        | 53.2    | 54.7                | 69.7     | 75.3              | 27.9  | 81.07   |
| CLIKA           | No Loss | No Loss             | No Loss  | -0.6              | -0.1  | -1.4    |
| Intel NNCF      | No Loss | -1.25               | -0.5     | -4.9              | -1.94 | -6.4    |
| Meta PyTorch    | -3.2    | -1.8                | -0.7     | -2.0              | -1.2  | FAILED  |
| Nvidia TensorRT | -21.8   | -0.6                | -0.6     | FAILED            | -9.0  | -       |
| Google TFLite   | N/A     | N/A                 | FAILED   | FAILED            | N/A   | N/A     |
Benefits

Ultimate Inference Optimization

Don't compromise on anything. Achieve both superior performance and cost benefits.

Enhance Your UX

Deliver your AI models to more users across more applications.

  • Discover new markets with On-Device AI
  • Engage users better with faster AI

Save On Operation Costs

Make your AI projects profitable with inference cost optimization

  • Optimize hardware investment
  • Reduce inference cost on cloud

Wish your AI compression and compilation jobs would magically just work?

© 2024 CLIKA Inc.
