Lead Engineer - Solution Architect / Data Science

Job Description

If you desire to be part of something special, to be part of a winning team, to be part of a fun team - winning is fun!


We are looking for a Lead Engineer - Solution Architect / Data Science for Eaton India Innovation Center's (EIIC) Center for Sustainability, based in Pune, India.


In Eaton, making our work exciting, engaging, meaningful; ensuring safety, health, wellness; and being a model of inclusion & diversity are already embedded in who we are - it’s in our values, part of our vision, and our clearly defined aspirational goals.


This exciting role is part of the Centre for Intelligent Power (CIP) team within the Eaton India Innovation Center (EIIC).


We at Eaton India Innovation Center's (EIIC) Centre for Intelligent Power (CIP) are looking for a highly motivated Lead Solution Architect (Data Science) who is passionate about his or her craft and possesses an innate entrepreneurial spirit to explore uncharted territory at the cutting edge of science and technology. The Solution Architect will be involved in building deep-learning-powered intelligent tools and end-to-end solutions for scenarios across diverse technologies.

He or she will also be involved in the architecture, design, and management of large-scale, highly distributed, multi-tenant data stores. In addition to building and maintaining these data stores, he or she will work to ensure that data is easily accessible and available to data scientists and business users across the enterprise, when and where it is needed.


Your essential responsibilities:

  • Demonstrate exceptional impact in delivering projects in terms of architecture, technical deliverables, and overall delivery throughout the project lifecycle; be conversant with Agile methodologies and tools, with a track record of delivering products in a production environment
  • Work with a team of experts in deep learning, machine learning, distributed systems, program management, and product teams, and work on all aspects of design, development and delivery of deep learning enabled end-to-end pipelines and solutions
  • Lead the development of technical solutions and implement architectures for project and products across data engineering and data science teams
  • Work hands-on with big data tools such as Kafka, Cassandra, and Hadoop, and with time-series databases such as InfluxDB and KairosDB
  • Apply knowledge of microservices and cloud APIs (e.g., AWS and Microsoft Azure)
  • Evaluate business requirements to determine potential solutions
  • Work with your team and others, defining the architecture, design, and management of secure, large-scale, highly-distributed, geo-redundant, multi-tenant data stores.
  • Recommend and set up appropriate performance-monitoring solutions; author and implement improvements based on the results
  • Be accountable for end-to-end delivery of solutions, from requirements gathering to production
  • Author high-quality, high-performance, unit-tested code to extract and transform data based on business and data science needs
  • Work directly with stakeholders, engineering, and test teams to create high-quality solutions that solve end-user problems
  • Mentor others in the use of tools and techniques
  • Develop and execute agile work plans for iterative and incremental project delivery, continuous integration (CI), and DevOps
  • Explore and recommend new tools and processes that can be leveraged across the data preparation pipeline to add capabilities and improve efficiency
  • Collaborate broadly across multiple functions (data science, engineering, product management, IT, etc.) to make key data readily available and easily consumable

Qualifications

If you have:

  • A Master's degree or Ph.D. (preferred) in computer science, software engineering, digital signal processing, or a related field

  • 10+ years of progressive experience in delivering technology solutions in a production environment

  • 6+ years of experience in the software industry as a developer, with a proven track record of shipping high quality products

  • 4+ years of experience preparing big data infrastructures, integrating data from disparate systems, and managing these systems

  • 4 years of experience working with customers (internal and external) to develop requirements, and working as a solutions architect to deliver end-to-end systems in a production environment

  • Excellent communication skills (verbal, presentation, documentation) for working with geographically dispersed teams to produce solutions that satisfy functional and non-functional requirements

  • The ability to specify and write code that is accessible, secure, and optimized, and that can output to different types of consumers and systems

  • Strong knowledge of big data query tools to perform ad hoc queries of large datasets

  • Solid understanding of relational and non-relational (NoSQL, time-series) database systems

  • Experience with in-memory, file-based and other data stores

  • A solid understanding of Java and/or Python and associated IDEs (Eclipse, IntelliJ, etc.)

  • Extensive experience with Agile development methodologies and concepts

  • Strong problem solving and software debugging skills

  • Experience building APIs to support data consumption needs of other roles

  • Excellent verbal and written communication skills, including the ability to effectively explain technical concepts

  • Awareness of upcoming software development/engineering tools, trends, and methodologies

  • Good judgment, time management, and decision-making skills

  • Knowledge of cloud development platforms such as Azure or AWS and their associated data storage options

  • Experience in Design Thinking or human-centered methods to identify and creatively solve customer needs through a holistic understanding of the customer's problem area

  • An advanced degree and/or specialization in a related discipline (e.g., machine learning)

  • Knowledge of multiple data transit protocols and technologies (MQTT, REST APIs, JDBC, etc.)

  • Knowledge of Hadoop and MapReduce/Spark or related frameworks

  • Knowledge of MongoDB, DocumentDB, and Cosmos DB


Yes? Then you are the one we are looking for, and we hope to hear from you now!


We make what matters work.

 

Making what matters work at Eaton takes the passion of every employee around the world. We create an environment where creativity, invention, and discovery become reality, each and every day. It's where bold, bright professionals like you can reach your full potential - and where you can help us reach ours. Want to know more about Eaton? Come on in @ www.eaton.com.

We make what matters work. Everywhere you look—from the technology and machinery that surrounds us, to the critical services and infrastructure that we depend on every day—you’ll find one thing in common. It all relies on power. That’s why Eaton is dedicated to improving people’s lives and the environment with power management technologies that are more reliable, efficient, safe and sustainable. Because this is what matters. We are confident we can deliver on this promise because of the attributes that our employees embody. We’re ethical, passionate, accountable, efficient, transparent and we’re committed to learning. These values enable us to tackle some of the toughest challenges on the planet, never losing sight of what matters.

Job: Engineering

Region: Asia Pacific
Organization: INNOV Innovation Center

Job Level: Individual Contributor
Schedule: Full-time
Is remote work (i.e. working from home or another Eaton facility) allowed for this position?: No
Does this position offer relocation?: Relocation from within hiring country only
Travel: Yes, 10% of the time