Senior Engineer (Data Engineering)

Job Description

If you desire to be part of something special, to be part of a winning team, to be part of a fun team – winning is fun. We are looking for a Senior Engineer (Data Engineering) based in Pune, India. At Eaton, making our work exciting, engaging, and meaningful; ensuring safety, health, and wellness; and being a model of inclusion & diversity are already embedded in who we are - they’re in our values, part of our vision, and our clearly defined aspirational goals. This exciting role offers the opportunity to:

The Data Engineer will be involved in the architecture, design, and management of large-scale, highly-distributed, multi-tenant data stores. In addition to building and maintaining these data stores, the Data Engineer will work to ensure that data is easily accessible and available to data scientists and business users across the enterprise when and where it is needed.

  • Work with your team and others, contributing to the architecture, design, and management of secure, large-scale, highly-distributed, geo-redundant, multi-tenant data stores
  • Support schema creation for optimized storage coupled with retrieval flexibility while ensuring security
  • Set up, integrate and utilize Big Data tools and frameworks as required for business and data science needs
  • Set up appropriate performance monitoring solutions; author and implement improvements based on the results
  • Demonstrate and document solutions using flowcharts, diagrams, code comments, code snippets, and performance instrumentation
  • Evaluate business requirements to determine potential solutions
  • Author high-quality, high-performance, unit-tested code to extract and transform data based on business and data science needs
  • Work directly with stakeholders, engineering, and test teams to create high-quality solutions that solve end-user problems
  • Provide work estimates and participate in design, implementation, and code reviews
  • Execute agile work plans for iterative and incremental project delivery
  • Explore and recommend new tools and processes that can be leveraged across the data preparation pipeline to add capabilities and improve efficiency
  • Integrate multiple sources of data through efficient ETL and other workflows
  • Collaborate broadly across multiple functions (data science, engineering, product management, IT, etc.) to make key data readily available and easily consumable

Qualifications

Requirements:

  • Bachelor's degree in computer science or software engineering
  • 3+ years of progressive experience in developing and designing technology solutions
  • 2+ years of experience in the software industry as a developer, with a proven track record of shipping high-quality products
  • 1+ years of experience preparing big data infrastructures, integrating data from disparate systems, and managing these systems
  • Ability to write complex queries that are accessible, secure, and perform in an optimized manner, with an ability to output to different types of consumers and systems
  • Strong knowledge of Hive (or a similar query language) to perform ad hoc queries of large datasets
  • Solid understanding of relational and non-relational database systems
  • Experience with in-memory, file-based, and other data stores
  • Solid understanding of Java and/or Python and associated IDEs (Eclipse, IntelliJ, etc.)
  • Basic understanding of data visualization tools such as Power BI and/or Tableau
  • Extensive experience with Agile development methodologies and concepts
  • Strong problem-solving and software debugging skills
  • Experience building APIs to support the data consumption needs of other roles
  • Abreast of upcoming software development/engineering tools, trends, and methodologies
  • Good judgment, time management, and decision-making skills
  • Knowledgeable in leveraging multiple data transit protocols and technologies (MQTT, REST APIs, JDBC, etc.)
  • Knowledge of Hadoop and MapReduce/Spark or related frameworks
  • Knowledge of cloud development platforms such as Azure or AWS and their associated data storage options
  • Knowledge of MongoDB, DocumentDB, and Cosmos DB
  • Knowledge of Scala

Yes! If you are the one we are looking for, we hope to hear from you now!

#LI-AB1

We make what matters work. Everywhere you look—from the technology and machinery that surrounds us, to the critical services and infrastructure that we depend on every day—you’ll find one thing in common. It all relies on power. That’s why Eaton is dedicated to improving people’s lives and the environment with power management technologies that are more reliable, efficient, safe and sustainable. Because this is what matters. We are confident we can deliver on this promise because of the attributes that our employees embody. We’re ethical, passionate, accountable, efficient, transparent and we’re committed to learning. These values enable us to tackle some of the toughest challenges on the planet, never losing sight of what matters.

Job: Engineering

Region: Asia Pacific
Organization: INNOV Innovation Center

Job Level: Individual Contributor
Schedule: Full-time
Is remote work (i.e. working from home or another Eaton facility) allowed for this position?: No
Does this position offer relocation?: Relocation from within hiring country only
Travel: Yes, 10% of the Time