Leverage mCentric's Expertise

mCentric has more than six years of in-depth experience designing, deploying and tuning large-scale Hadoop implementations. Most of these implementations include a continuous real-time analytics stream and the loading of Operational Analytics sinks and Data Lakes. Our experts have hands-on production experience ingesting network traffic data at 10 Gbps (billions of events per day).

Leverage our expert Consulting Services team to apply best practices to your Hadoop deployments and to optimize the performance and scalability of existing Hadoop clusters, ensuring that Hadoop's architectural advances deliver tangible business value.

We offer a range of tailored expert consulting engagements that align with each stage of your project, from inception through to post-production launch:

  • Hadoop Deployment Architecture
  • Hybrid in-house and Cloud Hadoop deployments
  • Hadoop Implementation
  • Analytics & Machine Learning
  • Telco System Audits
  • Hadoop Upgrade Services
  • Hadoop Health Check

We can also design custom engagements to meet your needs. To inquire about our services, please complete the form on this page.

    Hadoop Deployment Architecture Engagement

    • Planning a new Hadoop deployment or struggling to reach the throughput required in your Hadoop environment?
    • Designing a new system or migrating an existing system around real-time data and stream processing that must ingest millions or billions of events per day?

    Our Hadoop Architecture engagement will involve mCentric experts: a generalist across systems and Hadoop, plus subject-matter experts for the specific Hadoop components to be employed in the project. Our experts work hand in hand with your technical team to assess the Hadoop project you're planning or to review your existing deployment. We'll share best practices, assess design trade-offs, and flag potential areas of risk to ensure that your team's projects are designed and built correctly.

    The discussion and design-document deliverables will be tailored to your use cases and will include recommendations and best practices in a variety of areas, including:

    • Systems design for scalability & high availability
    • Design to guarantee data delivery & processing
    • Geo-redundancy and Multi-datacenter data flow architecture
    • Integration with internal and external systems
    • Design for real-time analytics processing applications
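    As one concrete illustration of the "guarantee data delivery & processing" goal above, at-least-once delivery is commonly paired with an idempotent sink so that redelivered batches cause no double-counting. A minimal sketch, with all names purely illustrative (not from any mCentric product):

```python
def process_batch(events, sink, seen_ids):
    """Apply each event at most once, even if the batch is redelivered."""
    for event in events:
        if event["id"] in seen_ids:   # duplicate from a redelivery
            continue
        sink.append(event["value"])   # the actual side effect
        seen_ids.add(event["id"])     # remember the id alongside the write

batch = [{"id": 1, "value": 10}, {"id": 2, "value": 20}]
sink, seen = [], set()
process_batch(batch, sink, seen)
process_batch(batch, sink, seen)  # redelivered batch changes nothing
```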

    Cloud Deployment or Hybrid Approach

    Big Data and Hadoop in the cloud are becoming increasingly popular as enterprises gain confidence in the security of the cloud for their platforms and/or look to a hybrid on-premises and cloud approach. mCentric has production-systems experience in Amazon's AWS and Google's GCP environments. Within the cloud there are many permutations of the compute, storage and networking layers, all of which affect the performance and operational characteristics of your clusters. This engagement provides an mCentric expert to work alongside your technical teams to assess an upcoming cloud deployment. The expert will assess design trade-offs, share known best practices, and provide recommendations based on our own testing and on knowledge from other existing deployments. The goal is a Hadoop deployment optimized for production in GCP or AWS that considers the following:

    • Performance and storage goals
    • Integration with in-house clusters
    • Cluster resilience
    • Disaster recovery or geo-replication across regions
    • The optimal instances and associated storage
    • Deployment methods using the latest infrastructure and configuration management tools
    • Data Protection and Security requirements
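    As a rough illustration of the "performance and storage goals" dimension, cluster storage is often sized from ingest rate, retention and replication. A back-of-the-envelope sketch with made-up figures (the defaults below are illustrative assumptions, not recommendations):

```python
def required_storage_tb(ingest_gb_per_day, retention_days,
                        replication=3, headroom=1.2):
    """Raw capacity: ingest * retention * HDFS replication * growth headroom."""
    return ingest_gb_per_day * retention_days * replication * headroom / 1024

# e.g. 500 GB/day retained for 90 days, 3x replication, 20% headroom
capacity_tb = required_storage_tb(500, 90)
```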

    Hadoop Implementation Engagement

    mCentric has a range of experts, all with large-scale production experience, which vastly reduces the risk of design and implementation errors of the "well, the prototype worked fine..." variety. Depending on the scope of the project, mCentric will recommend the requisite profiles to execute or assist with the implementation at the appropriate stages of the project life-cycle. mCentric's subject-matter experts are as follows:

    • Technical Project Management: design-validation practices to ensure business alignment
    • Data Pipeline Processing Engineers: tool expertise in Flume, Kafka, Google Pub/Sub, AWS Kinesis, Scala, Java and C programming, Google Tag Manager, Diameter, Radius, ASN.1, Syslog, etc.
    • Complex Event Processing Engineers: experts in mCentric's CEP and in Spark stream processing, senior in Scala, Java and C programming
    • Machine Learning and Natural Language Processing: data scientists with expertise in Spark, algorithm and model selection, definition and generation of supervised data sets for model tuning, and analysis and visualization of unsupervised results
    • Data Sink Engineers: experts in HDFS, HBase, Impala and Google BigQuery, with years of experience designing for throughput across ingestion, indexed access, scanning and archival
    • Execution Environment Engineers: expertise in Google Compute Cloud and AWS virtual private cloud configurations, and connectivity configurations for virtual private networks
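    As a small taste of the pipeline formats listed above, a classic BSD-style (RFC 3164) syslog line can be decomposed into facility, severity, host and message. A minimal, hypothetical parser sketch (real Diameter, Radius or ASN.1 feeds need dedicated decoders):

```python
import re

SYSLOG_RE = re.compile(
    r"<(?P<pri>\d{1,3})>"
    r"(?P<timestamp>\w{3} {1,2}\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) (?P<msg>.*)"
)

def parse_syslog(line):
    """Split a BSD-style syslog line into its fields, or return None."""
    m = SYSLOG_RE.match(line)
    if not m:
        return None
    pri = int(m.group("pri"))
    return {
        "facility": pri // 8,   # PRI encodes facility * 8 + severity
        "severity": pri % 8,
        "timestamp": m.group("timestamp"),
        "host": m.group("host"),
        "message": m.group("msg"),
    }

rec = parse_syslog("<34>Oct 11 22:14:15 gw01 su: 'su root' failed")
```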

    Analytics & Machine Learning Engagement

    • mCentric offers custom machine learning consulting that brings modern machine learning tools to you.
      • PhD-trained machine learning experts - Experts on our team are PhD-trained scientists with experience deploying machine learning tools for many different problems and industries. Let's set up an introductory meeting so you can meet our team.
      • Find structure in any data - Often the data we want or need isn't "analysis ready" from the outset. Thankfully our raw data expertise makes quick work of any structured or unstructured dataset to deliver finished projects fast.
      • Elegant solutions to complex problems - We use modern data science and machine learning tools that make predictive models, classification, segmentation, and natural language processing easier and more powerful than ever before.
    • Leverage our artificial intelligence development skills to build adaptive, intelligent solutions.
      • PhD-trained AI experts - Our expertise in developing and deploying artificial intelligence tools means that you will see results and new opportunities quickly.
      • Integrate algorithms for more power - We leverage the power of a diverse set of machine learning tools - including neural networks, reinforcement learning, gradient boosting, random forests, natural language processing, genetic algorithms, and Bayesian models - to develop effective AI solutions.
      • Learn in real-time from dynamic input - Effective AI tools are more than static, trained models. They must continuously learn and improve over time. That's why we build models that are capable of learning in real-time from ongoing, dynamic input.
    • Build predictive models that classify and quantify new events and customers based on historical data.
      • PhD-trained predictive modeling experts - Our expertise in predictive modeling means that we can design and deliver effective solutions quickly, avoiding the pitfalls of less-experienced teams.
      • Maximize predictive power with the right algorithms - New machine learning tools make developing predictive models easier than ever. But just because something is easy doesn't mean it's good. Our experience enables us to pick the right tool for the job, whether it's a deep convolutional neural network or a linear model.
      • Prediction output where you need it - Intelligent tools like predictive models should fit into your existing infrastructure. Our application development experience allows us to tightly integrate predictive models with your events processors, dashboards, reporting and other components of your infrastructure.
    • Detect anomalies in your analytics and operations using intelligent machine learning algorithms.
      • PhD-trained anomaly detection experts - Our team of data scientists offers your organization a unique, reliable perspective and a critical eye for challenges and opportunities.
      • Real-time monitoring of any data - Anomaly detection techniques allow you to monitor any data - for example, payment data, logging data, or click stream data - for anomalies that signal important events or issues for your operations.
    • Use recent advances in deep learning and neural networks to find structure in noisy, complex input.
      • PhD-trained deep learning experts - Our team of machine learning scientists and engineers is at the forefront of deep learning research with extensive experience in a variety of techniques and industries.
      • Employ powerful deep learning techniques - Deep learning allows you to develop powerful and intelligent neural network models that automatically learn complex representations of data - from image segmentation, to signal classification, medical image diagnostics, language processing, generative modeling, and more.
    • Leverage reinforcement learning to build adaptive, intelligent agents that get smarter over time.
      • PhD-trained reinforcement learning experts - Our team of machine learning scientists and engineers is at the forefront of deep reinforcement learning research with extensive experience in a variety of techniques and industries.
      • Build intelligent solutions with AI - We can help you implement state of the art techniques for reinforcement learning to build solutions that learn, adapt, and interact with their environment.
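    As a taste of the anomaly-detection work described above, one simple robust technique is the modified z-score built on the median absolute deviation (MAD). A minimal sketch with made-up payment figures; production models are far richer than this:

```python
from statistics import median

def mad_anomalies(values, threshold=3.5):
    """Flag values whose modified z-score exceeds the threshold."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []
    # 0.6745 rescales MAD into a standard-deviation-like unit
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

payments = [100, 102, 98, 101, 99, 103, 97, 100, 5000]
outliers = mad_anomalies(payments)
```

    The median-based statistic resists masking: a single extreme payment inflates the mean and standard deviation, but barely moves the median, so the outlier still stands out.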

    Telco System Audits

    The mCentric team are experts in telco standards, protocols and formats, which enables us to assemble the right team of specialists to move quickly through a subject-matter system audit.

    A typical subject-matter area is accounting and policy management. We have experts in Diameter, Radius and ASN.1 accounting events, and deep experience with the Gy and Gx protocols. Our engineers have worked with all of the major infrastructure providers' systems.

    The team leverages Hadoop and Machine Learning Analytics to identify outliers and anomalies that require investigation.

    Our team can work with data on premises, or can take anonymized (though still uniquely identifiable) data to the cloud to conduct our analytics.

    We have a track record of quickly identifying revenue leakage, and even new revenue opportunities, through our system audits.
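    A reconciliation pass of the kind used in these audits can be sketched as a join of usage records against billed records by session key; unmatched or mismatched sessions become leakage candidates for investigation. All field names below are hypothetical:

```python
def reconcile(usage, billing):
    """Return sessions never billed and sessions billed a different amount."""
    billed = {r["session"]: r["amount"] for r in billing}
    missing, mismatched = [], []
    for r in usage:
        if r["session"] not in billed:
            missing.append(r["session"])        # usage with no bill at all
        elif billed[r["session"]] != r["amount"]:
            mismatched.append(r["session"])     # billed, but wrong amount

    return missing, mismatched

usage = [{"session": "a1", "amount": 5}, {"session": "b2", "amount": 9},
         {"session": "c3", "amount": 4}]
billing = [{"session": "a1", "amount": 5}, {"session": "b2", "amount": 7}]
missing, mismatched = reconcile(usage, billing)
```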

    Hadoop Upgrade Engagement

    Do you have an existing Hadoop cluster in production and aim to upgrade to the latest version to take advantage of new features and bug fixes? The Upgrade engagement pairs an mCentric Hadoop expert with your operations and development teams to help prepare for your upgrade, bringing your team up to speed with the new features and sharing best practices and known pitfalls. The goal is to understand your existing deployment and to define a robust upgrade plan that maintains your business service level agreements. Optionally, the engagement can include full execution of the migration or technical assistance with the execution. A specific set of profiles will be assigned to the migration project. Experts may come from any of the following disciplines: data pipeline processing (Flume, Kafka, or home-grown Java or C), complex event processing, Spark stream processing, machine learning model migration and validation, Data Lake/Data Warehouse loading, and expert analysts specialized in reconciliation practices.

    • Have an expert review your production upgrade plan
    • Understand the upgrade process
    • Explore differences between versions
    • Maintain your SLAs

    Hadoop Health Check Engagement

    This engagement is suitable both for teams preparing to move into production and for those already in production with a Hadoop cluster. The goal is to have an expert work alongside your technical teams to assess the health and overall capabilities of your current Hadoop deployment, making recommendations to meet the ongoing needs of your business. The mCentric Hadoop expert will analyze system metrics and logs, alongside reviewing your business use cases and SLAs, to understand the whole picture. The engagement considers a number of critical dimensions for Hadoop deployments, such as:

    • Scalability
    • Reliability
    • Throughput
    • Latency
    • Hardware / Virtual Machine(s)
    • Monitoring
    • Capacity Estimation
    • Log Management
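    As one small example of the latency dimension, percentiles over sampled request timings often tell more than averages, since a single slow request can hide inside a healthy mean. A minimal nearest-rank sketch with made-up values:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: smallest value covering pct% of samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [12, 15, 11, 250, 14, 13, 16, 12, 15, 13]
p50 = percentile(latencies_ms, 50)  # typical request
p99 = percentile(latencies_ms, 99)  # tail latency, dominated by the outlier
```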