Jobs at SRKay Consulting Group

Big Data/ AI Engineer - SRKay Consulting Group

  • Kuala Lumpur, Kuala Lumpur, Malaysia
  • Apply by: Jan 01, 2026
  • 1 Vacancy
  • Local Candidates (Malaysia)
  • Shift: First Shift (Morning)
  • Career Level: Experienced Professional
  • Degree: Graduate
  • Experience: 7+ Years
  • Full Time/Permanent
  • Work from Office
Job is expired
Job Description


· Total IT / development experience of 7+ years

· Strong hands-on experience in big data analysis techniques, statistical models, and various data analysis tools

· Strong applied statistics skills, such as distributions, statistical testing, and regression; mathematical background in linear algebra

· Excellent understanding of Hadoop.

· Experience with Databricks

· Excellent scripting and programming skills, preferably in Python 3

· Experience building data pipelines for batch and stream processing systems.

· Experience with Spark and with various messaging systems, such as Kafka or RabbitMQ

· Experience with SQL and NoSQL databases such as HBase, Cassandra or MongoDB

· Experience with classification techniques

· Experience with big data ML toolkits such as Mahout or Spark ML

· Good knowledge of big data querying tools such as Pig, Hive, etc.

· Proficiency with HiveQL; able to analyze, develop, and debug Hive scripts independently

· Data processing ability (ETL techniques) using Hive scripting

· Candidate MUST NOT be limited to migrating data from legacy databases to a Hadoop cluster

· Proficiency with partitioning, analytical aggregation, and working with large tables

· Deep knowledge of Spark and Scala libraries to develop and debug complex data engineering challenges

· Experience developing sustainable, data-driven solutions with current and next-generation data technologies to drive our business and technology strategies

· Exposure to deploying on cloud platforms

· At least 3 years of development experience designing and developing data pipelines for data ingestion or transformation using Scala-Spark

· At least 3 years of development experience with the following big data areas: file formats (Parquet, Avro, ORC), resource management, distributed processing, and RDBMS

· At least 4 years of experience developing applications in Agile environments, with monitoring, build tools, version control, unit testing, TDD, CI/CD, and change management to support DevOps

· At least 6 years of development experience on Data technologies
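To illustrate the batch-pipeline skills listed above, here is a minimal, library-free sketch of the extract/transform/load pattern in plain Python 3. It is only an illustration of the shape such a job takes; at scale this logic would run on Spark or as a Hive script, and all names and sample data below are hypothetical, not part of the role.

```python
from collections import defaultdict

# Minimal batch ETL sketch: extract raw records, transform
# (clean/filter), then aggregate by a partition key -- the same
# shape a Spark or Hive aggregation job takes at scale.
# All record names and values here are illustrative only.

def extract(rows):
    """Parse raw CSV-like lines into (country, amount) records."""
    for line in rows:
        country, amount = line.split(",")
        yield country.strip(), float(amount)

def transform(records):
    """Drop invalid records (negative amounts)."""
    return ((c, a) for c, a in records if a >= 0)

def load(records):
    """Aggregate totals per partition key (country)."""
    totals = defaultdict(float)
    for country, amount in records:
        totals[country] += amount
    return dict(totals)

raw = ["MY, 100.0", "SG, 50.5", "MY, -1.0", "SG, 10.0"]
result = load(transform(extract(raw)))
print(result)  # {'MY': 100.0, 'SG': 60.5}
```

In a production pipeline the aggregation key would typically map to a Hive partition column, so each partition can be processed and queried independently.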

 

Good to have

 

· Hands-on experience migrating SAS toolset / statistical models to machine learning models

· Experience with banking risk, fraud, or digital marketing machine learning models and use cases

· ETL / Data Warehousing and Data Modelling experience prior to Big Data experience

· Knowledge of the AWS stack for big data and machine learning

Benefits
· Medical