Data Engineer (Databricks + Tableau)

Experience: 5 to 8 years
Location: Bengaluru, Gurgaon, Pune
Job code: 101436
Posted on: Feb 18, 2026

About Us:
AceNet Consulting is a fast-growing global business and technology consulting firm. We take a consultative approach, combining deep domain expertise with strong technology capabilities across business transformation, IT strategy & architecture, digital transformation, data engineering & analytics, AI/ML, cloud & infrastructure, enterprise applications and emerging technologies to deliver value to our global clients. AceNet’s marquee clients include Tier-1 and Tier-2 firms in banking & financial services, asset & wealth management, healthcare, consumer retail, eCommerce & logistics, engineering, government & public sectors, consulting and technology. With a presence in Abu Dhabi (UAE), Texas (USA) and India (Bangalore, Gurgaon & Pune), AceNet brings strong consulting and delivery capabilities across project staffing, managed services, outsourcing and offshoring.

Job Summary:
We are looking for a skilled Data Engineer with strong expertise in Databricks, PySpark, SQL, and Tableau to design, build, and optimize scalable data pipelines and analytics solutions. The ideal candidate has hands-on experience in data lake preparation, data automation, and building interactive dashboards. Strong programming skills in Python, along with exposure to web frameworks such as FastAPI or Flask, are essential.

Key Responsibilities:
*Design, develop, and maintain scalable data pipelines using Databricks and PySpark.
*Perform data lake preparation, transformation, and optimization for analytics and reporting.
*Develop and optimize complex SQL queries for data extraction and reporting.
*Build interactive dashboards and visualizations using Tableau.
*Implement data automation solutions (using Bedrock or similar platforms) to streamline data workflows.
*Develop APIs and lightweight data services using FastAPI or Flask.
*Work closely with business stakeholders and analytics teams to understand reporting requirements.
*Ensure data quality, integrity, governance, and security standards.
*Monitor, troubleshoot, and optimize performance of data workflows.

Role Requirements and Qualifications:
*Strong experience in Python for data engineering, including data processing and visualization (e.g., Plotly Dash).
*Hands-on experience in PySpark and distributed data processing.
*Strong experience in SQL, including query optimization and performance tuning.
*Experience in Tableau for dashboard and report development.
*Hands-on experience with Databricks and data lake environments.
*Experience in building APIs using FastAPI or Flask.
*Strong understanding of ETL/ELT processes and modern data architecture.

Why Join Us:
*Opportunities to work on transformative projects, cutting-edge technology and innovative solutions with leading global firms across industry sectors.
*Continuous investment in employee growth and professional development, with a strong focus on upskilling and reskilling.
*Competitive compensation & benefits, ESOPs and international assignments.
*Supportive environment with healthy work-life balance and a focus on employee well-being.
*Open culture that values diverse perspectives, encourages transparent communication and rewards contributions.

How to Apply:
If you are interested in joining our team and meet the qualifications listed above, please apply with your resume, highlighting why you are the ideal candidate for this position.