Big Data

Big Data is opening up opportunities that businesses have never tapped before, helping enterprises uncover insights buried in noisy, voluminous data. Sasken has the niche domain skills in distributed computing, polyglot storage, and data ingestion to bring your data to life, with offerings that span consulting through implementation and maintenance to cover all your big data needs.

We have trained, certified experts to make sure you do it right the first time.

Sasken’s Expertise

Sasken Offerings

  • Experiments & Labs
    • Assist in building POCs for business case formulation
    • MVP (Minimum-Viable-Product) Implementation
    • Quick 1-2 week experiments to benchmark performance and feasibility
    • NoSQL Databases Viability Tests
  • Monolithic-to-Polyglot Migrations
    • Migrate existing large-scale data warehouses to Apache Hadoop/Hive, Apache Spark, or Amazon Redshift
    • Add Apache Hadoop/Hive as a batch processing engine to supplement existing data processing pipelines
    • Use Hadoop/Hive as a historical backup/archive store
  • Setting up Big Data CoE (in-premise & cloud)
    • Hiring top talent – developers and administrators
    • Setting up Infrastructure (in-premise & cloud)
    • Training
  • Greenfield Big Data Platforms
    • IoT Platforms
    • High volume, real-time CDR (call detail record) Data Ingestion platforms
    • Platforms for Social Network Analytics
    • High performance geo spatial applications
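The archive-store migration above typically lands data in date-partitioned directories that a Hive external table can be pointed at. A minimal sketch of that partition layout in Python (the base path and `cdr_history` table name are hypothetical, chosen only to echo the CDR platforms mentioned below):

```python
from datetime import date, timedelta

def hive_partition_path(base: str, table: str, d: date) -> str:
    """Build the dt=YYYY-MM-DD partition layout that Hive external
    tables conventionally map onto. Names here are illustrative."""
    return f"{base}/{table}/dt={d.isoformat()}"

def archive_partitions(base: str, table: str, start: date, days: int) -> list:
    """Enumerate one partition path per day in the archival window."""
    return [hive_partition_path(base, table, start + timedelta(days=i))
            for i in range(days)]

paths = archive_partitions("/warehouse/archive", "cdr_history", date(2024, 1, 1), 3)
print(paths)
```

Partitioning by date keeps archival loads append-only and lets Hive prune old partitions at query time instead of scanning the whole store.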

Skills We Work On

Databases


Relational Databases (MySQL, PostgreSQL, IBM DB2, Oracle, SQL Server) | NoSQL Document Stores (MongoDB) | Columnar Databases (Apache HBase, IBM Netezza) | Graph Databases (Neo4j) | MPP Databases (Amazon Redshift, IBM Netezza) | Hadoop Data Warehouses (Apache Hive) | Huge Geo-Spatial Databases

Data Processing

Apache Hadoop | Apache Spark
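Both Hadoop MapReduce and Spark distribute the same underlying pattern: map records to key/value pairs, shuffle them by key, then reduce each group. A toy, single-process sketch of that model (not Hadoop or Spark API code, just the pattern they parallelize):

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Single-process illustration of the map/shuffle/reduce model
    that Hadoop MapReduce and Spark run across a cluster."""
    # Map: each record emits zero or more (key, value) pairs
    pairs = (kv for rec in records for kv in mapper(rec))
    # Shuffle: group values by key (the cluster does this over the network)
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Reduce: fold each key's values into one result
    return {key: reducer(key, values) for key, values in groups.items()}

# The classic word count
lines = ["big data big insights", "data pipelines"]
counts = map_reduce(
    lines,
    mapper=lambda line: [(word, 1) for word in line.split()],
    reducer=lambda key, values: sum(values),
)
print(counts)
```

The same three stages appear whether the job is a Hive query compiled to MapReduce or a Spark `reduceByKey`; the frameworks differ mainly in how the shuffle is scheduled and cached.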

Data Ingestion

Apache Kafka | Apache Flume | Amazon Kinesis
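A key property these ingestion systems share is keyed partitioning: messages with the same key are routed to the same partition, so per-key ordering survives parallel ingestion. A stdlib-only sketch of the idea (Kafka's real default partitioner hashes with murmur2; `crc32` is used here only to keep the example self-contained):

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Illustrative keyed partitioner: same key -> same partition,
    which preserves per-key message ordering under parallel writes.
    (Not Kafka's actual algorithm; Kafka's default uses murmur2.)"""
    return zlib.crc32(key) % num_partitions

# Hypothetical CDR-style events keyed by device id
events = [(b"device-42", "boot"), (b"device-7", "ping"), (b"device-42", "shutdown")]
partitions = [partition_for(key, 6) for key, _ in events]
print(partitions)
```

Because `device-42`'s two events hash to the same partition, a consumer reading that partition sees "boot" before "shutdown" regardless of how many producers or brokers are involved.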