Data Engineer (AWS)
2002 Summit Blvd Atlanta, GA 30319
As a Data Engineer supporting Enterprise Platforms, and as an Agile team member, you will be responsible for delivering strategic, cloud-based analytics data solutions. In partnership with counterpart technology teams, this role is accountable for the design, development, quality, support, and adoption of production-grade data and analytics solutions. A successful Engineer thrives as a collaborative member of a small team: they may wear many hats, tackle problems that stretch their current technical knowledge, and find resourceful ways to meet delivery commitments.
Technology Stack: AWS (data processing and data repository) and SFDC Einstein Analytics (visualization and presentation). Primary services leveraged within AWS are S3, EMR (Python), EC2, RDS (Oracle), Lambda, Athena, and Redshift.
Responsibilities:
1. In partnership with the Product Owner and Scrum development team members, deliver analytics solutions: collect data from providers, build transformations and integrations, persist data within repositories, and distribute it to consuming systems
2. Working primarily within AWS, deliver event-driven data processing pipelines, and ensure data sets are captured, designed, and housed effectively: consistently, optimized for cost, and easy to support and maintain.
3. Transition Minimum Viable Product (MVP) solutions into operationally hardened systems, including introducing reusable objects and patterns to drive automation, maintainability, and supportability.
4. Participate in backlog refinement and request decomposition, including data discovery and data analysis
5. Proactively identify, communicate, and resolve issues and risks that interfere with delivery commitments
6. Self-directed problem solving: research and collaborate with peers to drive technical solutions
7. Rapid response and cross-functional work to resolve technical, procedural, and operational issues
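To make responsibility 2 concrete for candidates: an event-driven pipeline in this stack typically means S3 object-created notifications triggering a Lambda function. The following is a minimal illustrative sketch only, not part of the role description; the event shape follows standard S3 put notifications, and the bucket and key names are hypothetical.

```python
import json
import urllib.parse

def handler(event, context):
    """Minimal AWS Lambda handler for an S3 put event: extract the
    bucket and object key from each record so downstream processing
    (e.g. an EMR or Athena step) can be kicked off per object."""
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        objects.append({"bucket": bucket, "key": key})
    return {"statusCode": 200, "body": json.dumps(objects)}
```

Locally, the handler can be exercised by passing a dict shaped like an S3 notification, which is also how such functions are usually unit-tested before deployment.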
Qualifications:
1. A minimum of 3 years of experience delivering analytics, reporting, or business intelligence solutions
2. A minimum of 3 years of experience developing in big data technologies (AWS, Hadoop, NoSQL)
3. Proficient in SQL and at least one of these programming languages: Java, Scala, Python
4. Experience designing event-driven data processing pipelines
5. At ease developing data solutions leveraging both databases and file systems via the command line (CLI)
6. Strong, hands-on technical skills and self-directed problem solving
7. Preferred: Experience developing in Spark (Spark Streaming, DataFrames, Datasets)
8. Preferred: Experience developing within AWS, especially EMR, Data Pipeline, Lambda, Athena, and Redshift
9. Desired: Experience in working on Agile teams (Scrum)
10. Desired: Experience administering or developing within SFDC Sales Cloud CRM, SFDC Einstein Analytics
11. Desired: Experience with data modeling (normalization, slowly changing, star, data vault)
12. Desired: Experience with MPP databases (Teradata, Exadata, Netezza, Redshift)
13. Desired: Foundational understanding of LEAN software development
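Qualification 11 mentions slowly changing dimensions, one of the data-modeling patterns this role works with. As an illustration only (not part of the posting), here is a minimal Type 2 slowly-changing-dimension sketch using Python's standard-library sqlite3; the table and column names are hypothetical.

```python
import sqlite3

# Type 2 SCD: when a tracked attribute changes, expire the current
# row and insert a new current row, preserving full history.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dim_customer (
        customer_id INTEGER,
        region      TEXT,
        valid_from  TEXT,
        valid_to    TEXT,     -- NULL means still current
        is_current  INTEGER
    )""")

def upsert_scd2(conn, customer_id, region, as_of):
    row = conn.execute(
        "SELECT region FROM dim_customer "
        "WHERE customer_id = ? AND is_current = 1",
        (customer_id,)).fetchone()
    if row is None or row[0] != region:
        # Expire the previous version, if one exists
        conn.execute(
            "UPDATE dim_customer SET valid_to = ?, is_current = 0 "
            "WHERE customer_id = ? AND is_current = 1",
            (as_of, customer_id))
        # Insert the new current version
        conn.execute(
            "INSERT INTO dim_customer VALUES (?, ?, ?, NULL, 1)",
            (customer_id, region, as_of))
    conn.commit()

upsert_scd2(conn, 1, "US-East", "2020-01-01")
upsert_scd2(conn, 1, "US-West", "2020-06-01")  # change creates a new version
```

After the second call, the table holds two rows for customer 1: the expired US-East version and the current US-West version.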
Education:
1. Bachelor's degree in Business, Management, Information Systems, Computer Science, or Engineering