Our market-leading energy client has an immediate need for a Data Engineer to join their team.
- Role is part of a DevOps delivery team.
- Business focus is Liquids Pipelines, within the Pipeline Integrity department.
- The project outcome is business process automation using large datasets.
The successful candidate in this role:
- Improves data availability by acting as a liaison between Lab teams and source systems.
- Collects, blends, and transforms data using ETL tools, database management system tools, and code development.
- Implements data models and structures data in formats ready for business consumption.
- Performs aggregations on data across various warehousing models (e.g., OLAP cubes, star schemas) for BI purposes.
- Interacts with business teams and can understand how data needs to be structured for consumption.
- Creates and maintains optimal data pipeline architecture.
- Assembles large, complex data sets that meet functional / non-functional business requirements.
- Identifies, designs, and implements internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, adding data quality checks, minimizing cloud costs, etc.
- Builds the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Databricks, and NoSQL.
- Builds analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Keeps our data separate and secure across national boundaries through multiple data centers and Azure regions.
- Documents and communicates standard methods, best practices and tools used.
- Works with other data engineers, data ingestion specialists, and subject matter experts across the company to consolidate methods and tool standards where practical.
Skills & Qualifications:
- We are looking for a candidate with 5+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- Experience using the following software/tools:
- Experience with big data tools: Hadoop, HDInsight (HDI), and Spark.
- Experience with relational SQL and NoSQL databases, including Cosmos DB.
- Experience with data pipeline and workflow management tools: Databricks (Spark), Azure Data Factory (ADF), Dataflow.
- Experience with Microsoft Azure.
- Experience with stream-processing systems: Storm, Stream Analytics, IoT Hub, Event Hub.
- Experience with object-oriented/functional scripting languages (Python, Scala) and with SQL.
- Ability to produce formal written deliverables, including data models, technical standards and guidelines, investigations and recommendations, presentations and briefings, future-state roadmaps, and metadata catalogs for business and technical use.
If you're a technical professional, you know that it can be difficult to find fulfilling work that advances your career. At the Ian Martin Group, we exist to connect professionals like you with meaningful work at industry-leading companies in your field. And we walk the walk, too: as a Certified B Corporation, we believe in using business as a force for good for people, our communities, and the environment.
We value diversity and inclusion and encourage all qualified people to apply. If we can make this easier through accommodation in the recruitment process, please contact us at firstname.lastname@example.org.
We encourage all qualified candidates to apply; however, only those selected for an interview will be contacted.