Full Stack Engineer
- Work with your team and others, contributing to the architecture, design, and management of secure, large-scale, highly-distributed, geo-redundant, multi-tenant data stores
- Support schema creation for optimized storage while ensuring security
- Recommend, set up, integrate and utilize Big Data tools and frameworks as required
- Recommend and set up appropriate performance monitoring solutions
- Demonstrate and document solutions using flowcharts, diagrams, code comments, code snippets, and performance instruments
- Evaluate business requirements to determine potential solutions
- Author high-quality, high-performance, unit-tested code to extract and transform data based on business and data science needs
- Work directly with stakeholders, engineering, and test teams to create high-quality solutions that solve end-user problems
- Develop and execute agile work plans for project delivery
- Explore and recommend new tools and processes
- Integrate multiple sources of data through efficient ETL and other workflows
- Collaborate broadly across multiple functions (data science, engineering, product management, IT, etc.) to make key data readily available and easily consumable
- A Bachelor's degree in computer science or software engineering
- 5+ years of progressive experience developing and designing technology solutions
- 3+ years of experience in the software industry as a developer, with a proven track record of shipping high-quality products
- 2+ years of experience preparing big data infrastructures, integrating data from disparate systems, and managing these systems
- Ability to write complex queries that are accessible, secure, and optimized, and that can deliver output to different types of consumers and systems
- Strong knowledge of Hive (or similar language) to perform ad hoc queries of large datasets
- Solid understanding of relational and non-relational database systems
- Experience with in-memory, file-based and other data stores
- Solid understanding of Java and/or Python and associated IDEs (Eclipse, IntelliJ, etc.)
- Basic understanding of data visualization tools such as Power BI and/or Tableau
- Extensive experience with Agile development methodologies and concepts
- Strong problem solving and software debugging skills
- Experience building APIs to support data consumption needs of other roles
- Time management, and decision-making skills
- Excellent communication skills
- Knowledgeable in leveraging multiple data transit protocols and technologies (MQTT, REST APIs, JDBC, etc.)
- Knowledge of Hadoop and MapReduce/Spark or related frameworks
- Knowledge of cloud development platforms such as Azure or AWS and their associated data storage options
- Knowledge of MongoDB, DocumentDB, and Cosmos DB
- Knowledge of Scala
We make what matters work. Everywhere you look—from the technology and machinery that surrounds us, to the critical services and infrastructure that we depend on every day—you’ll find one thing in common. It all relies on power. That’s why Eaton is dedicated to improving people’s lives and the environment with power management technologies that are more reliable, efficient, safe and sustainable. Because this is what matters. We are confident we can deliver on this promise because of the attributes that our employees embody. We’re ethical, passionate, accountable, efficient, transparent and we’re committed to learning. These values enable us to tackle some of the toughest challenges on the planet, never losing sight of what matters.
Region: Europe, Middle East, Africa
Organization: CTO Corporate Technology Office
Job Level: Individual Contributor
Is remote work (i.e. working from home or another Eaton facility) allowed for this position?: No
Does this position offer relocation?: No