(EXPERIENCED) 'InvestingChannel': Big Data Engineer @ Hyderabad

InvestingChannel is recruiting experienced candidates in Hyderabad for the role of Big Data Engineer. ME/M.Tech, BCA, and BE/B.Tech candidates can apply. Read on for further details.

InvestingChannel is an innovative digital media company featuring a valuable repertoire of financial websites, award-winning content, and robust advertising and subscription solutions.

We are looking for an experienced and passionate Data Engineer with 6+ years of experience building scalable, high-performance distributed systems that handle large data volumes. You will be responsible for development work on all aspects of Big Data: data provisioning, modeling, performance tuning, and optimization.

Responsibilities

  • Work closely with business and development teams to translate business/functional requirements into technical specifications that drive Big Data solutions.
  • Participate in software design meetings and write technical design documents.
  • Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low-latency, and fault-tolerance in every system built.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies (see the sketch after this list).
  • Maintain application stability and data integrity by monitoring key metrics and improving code base accordingly.
  • Understand and maintain the existing codebase through regular refactoring, applying requested fixes and features.
  • Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader.
  • Be flexible in learning new technologies and frameworks as required.
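
For illustration, here is a minimal sketch of the kind of extract-transform-load step described above, written in core Java using only the JDK. The file names and the pipe-delimited record layout are hypothetical assumptions for the example, not part of the role description.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

/**
 * Minimal ETL sketch: reads pipe-delimited quote records from a local
 * file, normalizes them, and writes CSV output. The file names and
 * three-field record layout are hypothetical examples.
 */
public class QuoteEtlJob {

    public static void main(String[] args) throws IOException {
        Path input = Path.of("quotes.psv");   // hypothetical source file
        Path output = Path.of("quotes.csv");  // hypothetical target file

        // Extract: load raw lines from the source.
        List<String> rawLines = Files.readAllLines(input);

        // Transform: split, trim, and normalize the symbol to upper case.
        List<String> cleaned = rawLines.stream()
                .map(line -> line.split("\\|"))
                .filter(fields -> fields.length == 3)    // drop malformed rows
                .map(fields -> String.join(",",
                        fields[0].trim().toUpperCase(),  // symbol
                        fields[1].trim(),                // price
                        fields[2].trim()))               // timestamp
                .collect(Collectors.toList());

        // Load: write the normalized records to the target.
        Files.write(output, cleaned);
    }
}
```

In a production pipeline of the kind the posting describes, the same extract/transform/load structure would typically be expressed as Spring Batch readers, processors, and writers over S3 or JDBC sources rather than local files.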

Requirements

  • 6+ years of experience in core Java and Spring Batch.
  • Highly proficient in SQL, with expertise in writing complex, highly optimized queries across large data sets.
  • 4+ years of experience with ingestion pipelines and data processing.
  • Experience with data governance (data quality, metadata management, security, etc.).
  • Hands-on experience with the Parquet format, AWS Redshift, Redshift Spectrum, and Athena.
  • Experience with multiple data sources and consolidating them into a data lake.
  • 2+ years of experience in Hadoop technologies
  • Experience in a Linux/UNIX environment is mandatory.
  • Ability to work with log files and Unix processes.
  • Strong analytical skills for working with structured and unstructured datasets.
  • Excellent understanding of data structures and algorithms.
  • Prior exposure to building real-time data pipelines would be an added advantage.
  • Experience with data visualization tools is good to have.
  • Experience working within a fast-paced Agile development process.
  • Excellent problem-solving skills.
  • Technology 'Must Haves': Core Java, Spring Batch, Data Structures, Web Services, RDBMS (MySQL/SQL Server), NoSQL, HDFS, MapReduce, Hive, Kafka, Airflow, Spark, Hibernate Framework, Shell Scripting (a minimal Kafka consumer sketch follows this list).
  • Technology 'Good to Haves': Design Patterns, Jenkins, Python, R
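
Since Kafka and real-time pipelines appear among the must-haves, here is a minimal consumer loop using the Apache Kafka client library (kafka-clients on the classpath). The broker address, consumer group, and topic name are placeholder assumptions, not details from the posting.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

/** Minimal Kafka consumer loop; broker and topic details are assumptions. */
public class QuoteConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("group.id", "quote-consumers");          // hypothetical consumer group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("quotes"));  // hypothetical topic name
            while (true) {
                // Poll the broker, then print each record's offset, key, and value.
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```

A real ingestion service would add offset-commit handling, deserialization into typed records, and graceful shutdown, but the subscribe-and-poll loop above is the core of the pattern.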

How to apply for this job?

Apply here
