Analytics Engineer
Careers Integrated Resources Inc
Atlanta, GA (Onsite)
Contractor
Job Title: Analytics Engineer
Location: Atlanta, GA 30334 (Hybrid: Tuesday to Thursday Onsite)
Duration: 10+ Months Contract (Possible extension)
Client Domain: Public Sector
Interview Process: Either Web Cam or In Person
Job Description:
Position Summary:
Under general supervision, combines strong technical skills with knowledge of database administration. Works on one or more projects of high complexity.
The Analytics Engineer will contribute to our modern data estate strategy by developing scalable data solutions using Microsoft Fabric and Azure Databricks. This role will be instrumental in building resilient data pipelines, transforming raw data into curated datasets, and delivering analytics-ready models that support enterprise-level reporting and decision-making.
Job Responsibilities:
Data Engineering & Pipeline Development
Build and maintain ETL/ELT pipelines using Azure Databricks and Microsoft Fabric.
Implement medallion architecture (Bronze, Silver, Gold layers) to support data lifecycle and quality.
Develop real-time and batch ingestion processes from IES Gateway and other source systems.
Ensure data quality, validation, and transformation logic is consistently applied.
Use Python, Spark, and SQL in Databricks and Fabric notebooks for data transformation.
Delta Lake: Implement Delta Lake for data versioning, ACID transactions, and schema enforcement.
Integration with Azure Services: Integrate Databricks with other Azure services such as OneLake, ADLS Gen2, and Microsoft Fabric.
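The pipeline duties above reference the medallion pattern (promoting raw Bronze records to validated Silver datasets). As a minimal illustration of that kind of validation and transformation logic, here is a plain-Python sketch; the record fields, validation rules, and the `promote_to_silver` helper are all hypothetical, not taken from the posting:

```python
# Hypothetical sketch of Bronze -> Silver promotion in the medallion pattern.
# Field names ("case_id", "amount") and validation rules are illustrative only.

def promote_to_silver(bronze_rows):
    """Apply basic validation and transformation to raw (Bronze) records,
    returning the curated (Silver) subset."""
    silver = []
    for row in bronze_rows:
        # Validation: drop records missing a primary key.
        if not row.get("case_id"):
            continue
        # Validation: drop records whose amount is missing or non-numeric.
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue
        # Transformation: standardize field names and types.
        silver.append({
            "case_id": str(row["case_id"]).strip(),
            "amount_usd": round(amount, 2),
            "source_system": row.get("source", "unknown"),
        })
    return silver

bronze = [
    {"case_id": " A-100 ", "amount": "12.50", "source": "IES"},
    {"case_id": "", "amount": "9.99"},       # rejected: missing key
    {"case_id": "A-101", "amount": "oops"},  # rejected: bad amount
]
print(promote_to_silver(bronze))
# [{'case_id': 'A-100', 'amount_usd': 12.5, 'source_system': 'IES'}]
```

In a Databricks or Fabric notebook the same logic would typically run as PySpark DataFrame transformations writing to Delta tables rather than as plain Python.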
Data Modeling & Curation
Collaborate with the Domain Owners to design dimensional and real-time data models.
Create analytics-ready datasets for Power BI and other reporting tools.
Standardize field naming conventions and schema definitions across datasets.
Data Governance & Security
Apply data classification and tagging based on DECAL's data governance framework.
Implement row-level security, data masking, and audit logging as per compliance requirements.
Support integration with Microsoft Purview for lineage and metadata management.
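To make the row-level security and data masking duties concrete, here is a small Python sketch under assumed rules: the region-based visibility policy, the SSN-style field, and both helper functions are invented for illustration and are not part of any specific compliance framework:

```python
# Illustrative sketch of column masking plus row-level filtering.
# The "region" policy and "ssn" field are hypothetical examples.

def mask_ssn(ssn: str) -> str:
    """Mask all but the last four digits of an SSN-like value."""
    digits = ssn.replace("-", "")
    return "***-**-" + digits[-4:]

def apply_row_level_security(rows, user_region):
    """Return only rows the user's region may see, with sensitive
    columns masked before anything leaves the function."""
    visible = []
    for row in rows:
        if row["region"] != user_region:
            continue  # row-level security: filter out other regions
        masked = dict(row)
        masked["ssn"] = mask_ssn(row["ssn"])  # column masking
        visible.append(masked)
    return visible

rows = [
    {"region": "GA", "ssn": "123-45-6789", "name": "A"},
    {"region": "TX", "ssn": "987-65-4321", "name": "B"},
]
print(apply_row_level_security(rows, "GA"))
# [{'region': 'GA', 'ssn': '***-**-6789', 'name': 'A'}]
```

In practice these controls would usually be enforced in the platform itself (e.g. row-level security in SQL Server or Power BI) rather than in application code; the sketch only shows the intent.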
Data Modeling:
Dimensional modeling
Real-time data modeling patterns
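As a rough illustration of the dimensional modeling mentioned above, the sketch below assigns surrogate keys to a dimension and links fact rows to it. The table shape, column names, and helper functions are invented for this example:

```python
# Minimal sketch of dimensional modeling: a dimension with surrogate keys
# and a fact table referencing it. Names ("provider", "amount") are invented.

def build_dimension(records, natural_key):
    """Assign an integer surrogate key to each distinct natural-key value."""
    dim = {}
    for rec in records:
        value = rec[natural_key]
        if value not in dim:
            dim[value] = len(dim) + 1  # next surrogate key
    return dim

def build_fact(records, dim, natural_key):
    """Replace the natural key in each record with the dimension's
    surrogate key, producing fact rows."""
    return [
        {"provider_key": dim[rec[natural_key]], "amount": rec["amount"]}
        for rec in records
    ]

raw = [
    {"provider": "P1", "amount": 10.0},
    {"provider": "P2", "amount": 20.0},
    {"provider": "P1", "amount": 5.0},
]
provider_dim = build_dimension(raw, "provider")
fact = build_fact(raw, provider_dim, "provider")
print(provider_dim)  # {'P1': 1, 'P2': 2}
print(fact[2])       # {'provider_key': 1, 'amount': 5.0}
```

Separating the dimension from the fact this way is what lets reporting tools like Power BI build a star schema over the curated datasets.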
Reporting & Visualization Support
Partner with BI developers to ensure data models are optimized for Power BI.
Provide curated datasets that align with reporting requirements and business logic.
Create BI dashboards and train users.
DevOps & Automation
Support CI/CD pipelines for data workflows using Azure DevOps.
Assist in monitoring, logging, and performance tuning of data jobs and clusters.
Required Qualifications:
Bachelor's degree in Computer Science, Data Engineering, or a related field.
3+ years of experience in data engineering or analytics engineering roles.
Advanced SQL: Proficiency in advanced SQL techniques for data transformation, querying, and optimization.
Hands-on experience with:
- Azure Databricks (Spark, Delta Lake)
- Microsoft Fabric (Dataflows, Pipelines, OneLake)
- SQL and Python (Pandas, PySpark)
- SQL Server 2019+
Familiarity with data modeling, data governance, and data security best practices.
Strong understanding of ETL/ELT processes, data quality, and schema design.
Preferred Skills:
Experience with Power BI datasets and semantic modeling.
Knowledge of Microsoft Purview, Unity Catalog, or similar governance tools.
Exposure to real-time data processing and streaming architectures.
Knowledge of federal/state compliance requirements for data handling.
Familiarity with Azure DevOps, Terraform, or CI/CD for data pipelines.
Certifications (preferred):
Microsoft Fabric Analytics Engineer.
Soft Skills:
Strong analytical and problem-solving abilities.
Excellent communication skills for technical and non-technical audiences.
Experience working with government stakeholders.
Skills (Skill Set / Required or Desired / Amount of Experience):
- 3+ years of experience in data engineering or analytics engineering roles. (Required, 3 years)
- Advanced SQL: Proficiency in advanced SQL techniques for data transformation, querying, and optimization. (Required, 3 years)
- Azure Databricks (Spark, Delta Lake) (Required, 3 years)
- Azure Data Factory, SQL VMs, build and maintain ETL (Required, 2 years)
- SQL and Python (Pandas, PySpark) (Required, 3 years)