Database Architect

Category: Information Technology
Industry: Oil & Gas
Type: Contract
Location: Edmonton, Alberta
Job ID: #165426
Recruiter: Mohit Sisodyia

Our Edmonton-based client has an immediate need for a Database Architect to join their team for a six-month contract opportunity.

High-Level Description
As part of the LP TIS Business Applications Services Team, tasks will include positively disrupting core businesses with innovative and impactful products using advanced technologies. This role will help ingest, transform, model, and store clean, enriched data ready for downstream subscribers. The ideal candidate can design, model, and maintain data repositories such as operational stores, data lakes, and analytic repositories. This senior role will contribute to project delivery, ensuring consistent, repeatable business outcomes and a positive client experience.

Specific Accountabilities
• Improves data availability by acting as a liaison between development teams, operational teams, and business stakeholders
• Investigates and champions simplified integration between solutions related to program areas
• Collects, blends, and transforms data using ETL/ELT tools, database management system tools, and custom code to deliver complex data engineering tasks
• Implements data models and structures data for business consumption formats and application integration
• Develops integrations and aggregations on data across various warehousing models for BI and data science purposes
• Champions Data-as-an-Asset design with business and development teams
• Champions data governance practices and pursues operational accountability of data
• Designs and executes data migrations

Scope/Dimensions
• Role is part of a Development and Operations (DEV/OPS) delivery team
• Business focus in Liquids Pipelines, Pipeline Integrity department
• Project(s) outcome is business process automation using large datasets, and providing data as an asset to stakeholders

Working Relationships
• Business Partners
• Business Leads
• Local Lab Heads
• Lab Accelerator team (team of dedicated functional support (e.g., Finance, HR) for the Lab)
• Digital Leads (e.g., Design Head, Data Science Head)
• Data Scientists
• Centre of Excellence
• Technology Working Groups
• Product Team

Knowledge, Skills & Abilities
Skills: 5+ Years
• Create and maintain optimal data pipeline architecture.
• Assemble large, complex data sets that meet functional / non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, adding data quality checks, minimizing cloud costs, etc.
• Build the infrastructure required for optimal ingestion, transformation, and loading of data from a wide variety of data sources using SQL, Databricks, NoSQL, and Data Lake Stores
• Design and maintenance of Cloud Data Lake Stores
• Design and implement a semantic tier that utilizes the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
• Keep our data separated and secure across national boundaries through multiple data centers and Azure regions.
• Document and communicate standard methods, best practices and tools used.
• Design and build complex data engineering tasks in conjunction with data scientists
• Work with other data engineers, data ingestion specialists, and subject matter experts across the company to consolidate methods and tool standards where practical.
• We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
• Experience using the following software/tools:
o Experience with big data tools: Hadoop, the MS Azure cloud stack, and Spark
o Experience developing and administering relational SQL and NoSQL databases: MS Azure Databases
o Experience with data pipeline and workflow management tools: Databricks (Spark), ADF, Dataflow
o Experience with Microsoft Azure and Azure DevOps
o Experience with object-oriented/functional scripting languages: Python, Scala, SQL, .NET
o Ability to produce formal written deliverables including data models, technical standards and guidelines, investigations and recommendations, presentations and briefings, future state roadmaps, and meta-data catalogs for business and technical use.

Experience in the following would be an asset but is not required:
o Stream-processing systems: Storm, Streaming-Analytics, IoT Hub, Event Hub
o Data governance platforms

Working Conditions
• Office environment
• Occasional travel to other Lab locations (Calgary)
• Pressurized environment, working to tight deadlines

Looking for meaningful work? We can help!

If you're a technical professional, you know that it can be difficult to find fulfilling work that advances your career. At the Ian Martin Group, we exist to connect professionals like you with meaningful work at industry-leading companies in your field. And we walk the walk, too: as a Certified B Corporation, we believe in using business as a force for good for people, our communities, and the environment.

We value diversity and inclusion and encourage all qualified people to apply. If we can make this easier through accommodation in the recruitment process, please contact us at recruit@ianmartin.com.

We encourage all qualified candidates to apply; however, only those selected for an interview will be contacted.
