Data Engineer in Noida, IN at CareerBuilder

Date Posted: 5/2/2022

Job Description

Office Locations: Bangalore, India and Noida, India 

The Data team at CareerBuilder enables the fundamental capabilities required for CareerBuilder's suite of products, including the Talent Discovery Platform, which supports 140M+ profiles powered by AI/ML capabilities. The Data Warehouse (DW) team, part of the Data team, provides data and information as a service to internal teams at CareerBuilder (Marketing, Sales, Finance, Product Engineering, Quality Assurance, and Customer Success), as well as the analytics capabilities required by the CB Analytics application used by CareerBuilder customers. The services offered by the DW team encompass dimensional modeling, data loading through ETL (Extract, Transform, and Load) jobs, building KPIs, and creating reports for both Business Intelligence and Analytics.


The Data Warehouse currently processes 40 TB of data, with an architecture that includes Big Data technologies (HDFS, EMR clusters), analytical databases (AWS Redshift), object storage systems (AWS S3), Microsoft SQL Server, Microsoft SSIS (SQL Server Integration Services) and SSRS (SQL Server Reporting Services), Tableau, and Google BigQuery. The reporting layer provides Business Intelligence to our internal consumers, as well as insights to CareerBuilder customers through innovative web-based and mobile-native visualizations.


We are looking for a highly skilled Data Engineer who can work with the developers from the Data Warehouse team to build a next-generation data pipeline using Big Data technologies that enables faster analytical processing, with the goal of serving both batch and near-real-time analytics needs.


Note: The expectation for this position is a Data Engineer who is well versed in one or more programming languages used in data engineering (for example, Python or Scala) as well as SQL, and who can build next-generation data pipelines using Big Data technologies, programmatic frameworks, and cloud environments. While knowledge of SQL and of declarative frameworks (for example, SSIS or Informatica) helps, this position requires an engineer who can code.
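For illustration only (this code is not CareerBuilder's), a programmatic pipeline of the kind described reduces to the same extract-transform-load skeleton regardless of framework. A minimal pure-Python sketch, with hypothetical field names and in-memory stand-ins for the source and the warehouse:

```python
# Illustrative sketch of a minimal programmatic ETL pipeline.
# Field names and targets are hypothetical; a production pipeline would
# use a framework such as Apache Airflow or Spark for orchestration.

def extract(source_rows):
    """Pull raw records from a source (here, an in-memory stand-in)."""
    return list(source_rows)

def transform(rows):
    """Normalize fields and derive a simple analytics-ready column."""
    out = []
    for row in rows:
        out.append({
            "profile_id": int(row["id"]),
            "country": row["country"].strip().upper(),
            "is_active": row["last_login_days"] <= 30,
        })
    return out

def load(rows, warehouse):
    """Append transformed rows to the target table (a dict stand-in)."""
    warehouse.setdefault("dim_profile", []).extend(rows)
    return len(rows)

# Usage: run the three stages end to end.
raw = [
    {"id": "1", "country": " in ", "last_login_days": 5},
    {"id": "2", "country": "us", "last_login_days": 90},
]
warehouse = {}
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # number of rows loaded into the warehouse stand-in
```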

Essential Responsibilities:

  • Design, build, and launch extremely efficient and reliable data pipelines to move data from a variety of sources (SQL, NoSQL, streams, etc.) to targets (data warehouses, data lakes, etc.).
  • Working with DW developers and DW subject-area experts, architect pipelines that deliver the data models required for both batch and real-time analytics.
  • Develop data pipelines that scale to massive datasets and large clusters of machines.
  • Leverage expert coding skills in several languages (for example, Python, Scala, or Java) and modern technologies to build pipelines that are fault-tolerant, catch data-quality issues, are easy to troubleshoot, and have auditing capabilities.
  • Design and develop new systems and tools to enable end users to consume and understand data faster.
  • Work across multiple teams in high visibility roles and own the solution end-to-end.
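The data-quality and auditing points above can be sketched in pure Python (illustrative only; the validation rules and field names are hypothetical, not CareerBuilder's):

```python
# Illustrative sketch: gate a pipeline load behind data-quality checks
# and keep an audit trail. Rules and field names are hypothetical.

def check_quality(rows):
    """Split rows into good/bad and record why each bad row failed."""
    good, audit = [], {"passed": 0, "failed": 0, "reasons": []}
    for row in rows:
        if row.get("profile_id") is None:
            audit["failed"] += 1
            audit["reasons"].append("missing profile_id")
        elif not isinstance(row.get("salary"), (int, float)) or row["salary"] < 0:
            audit["failed"] += 1
            audit["reasons"].append("invalid salary")
        else:
            good.append(row)
            audit["passed"] += 1
    return good, audit

# Usage: only rows that pass the checks would proceed to the load stage;
# the audit dict would be written to an audit table or log.
rows = [
    {"profile_id": 1, "salary": 50000},
    {"profile_id": None, "salary": 60000},
    {"profile_id": 3, "salary": -10},
]
good, audit = check_quality(rows)
print(audit["passed"], audit["failed"])
```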

Job Requirements:

  • At least 8 years of demonstrable experience as a Data Engineer building data pipelines.
  • At least 8 years of hands-on experience in one or more programming languages (for example, Python, Scala, or Java), applying those skills to large-scale data and analytics processing that interfaces with data warehouses and data lakes.
  • Experience in one or more Big Data technologies (distributed computing platforms such as Hadoop and Spark, NoSQL databases, distributed real-time systems, BigQuery, Apache Hive).
  • Experience in one or more AWS technologies, for example AWS EMR, AWS S3, or AWS Lambda.
  • Proficiency in writing complex SQL queries.
  • Proficiency in working with NoSQL databases.
  • Analytical mind with a problem-solving aptitude.
  • Proven ability to take initiative and provide innovative solutions.
  • Strong written and verbal communication skills; a team player with strong empathy for our internal as well as external customers.
  • Self-learner able to quickly pick up new technologies.

Preferred Skills and Abilities:

  • Experience in building data pipelines using programmatic frameworks (for example, Apache Airflow or Apache Spark) in addition to declarative frameworks.
  • Experience in reviewing existing data pipelines, identifying bottlenecks, and making noticeable architectural improvements to them.
  • Experience in building real-time analytics leveraging streaming technologies (for example, Apache Kafka).
  • Experience in object-oriented software development.
  • Ability to identify and adopt open-source libraries and integrate them with existing systems, based on requirements.
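As a hedged illustration of the streaming point above, near-real-time analytics often reduces to windowed aggregation over an event stream. A pure-Python sketch (the event shape and 60-second window are hypothetical; a production system would use Kafka plus a stream processor):

```python
from collections import defaultdict

# Illustrative sketch: tumbling-window counts over an event stream.
# Each event is (timestamp_seconds, event_type); both are hypothetical.

def windowed_counts(events, window_seconds=60):
    """Group events into tumbling windows and count per (window, type)."""
    counts = defaultdict(int)
    for ts, event_type in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, event_type)] += 1
    return dict(counts)

# Usage: simulate a small stream of job-site events.
events = [
    (5, "search"), (30, "apply"), (59, "search"),
    (61, "search"), (130, "apply"),
]
print(windowed_counts(events))
```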

Benefits and Perks

Connecting people with meaningful work is one of the most important things anyone can do – which means we need to support the employees who make that possible. CareerBuilder’s team enjoys a host of perks and benefits, including: 

  • Group Health Insurance – Acko General
  • Group Personal Accidental Insurance – Acko General
  • Group Term Life Insurance – Tata AIA Life Insurance
  • Retirement Plan – Provident Fund
  • Retirement Plan – Group Gratuity Plan (GGP)
  • Time off
    • Holidays
    • Casual Leaves
    • Sick Leaves
    • Maternity Leave
    • Paternity Leave
    • Compassionate Leave
  • Employee Referral Program
  • Remote Flexibility
  • Rewards & Recognition Program
  • Virtual Employee Events

TSR ID: 002527
