User Profile

I possess deep expertise in cloud technologies, big data, and AI. I have hands-on proficiency with AWS services (including Glue, SageMaker, Lambda, SQS, SNS, and Athena), Databricks, Delta Lake, and Hadoop ecosystems (Spark, Kafka, Sqoop). I specialize in building and managing cloud architectures, transforming large datasets, and automating data pipelines using tools such as Terraform, Jenkins, and AWS Batch. Additionally, I have significant experience with Generative AI, leveraging Amazon Bedrock to build advanced solutions with models such as Amazon Titan and Claude.
My experience includes designing and implementing complex data lake and lakehouse architectures, particularly with Delta Lake, and creating ETL processes using technologies such as Informatica, Ab Initio, and Databricks. I am skilled in building scalable, efficient data solutions for enterprise environments, working on both cloud-native and hybrid infrastructures. With a strong background in Agile methodologies, I have successfully developed and deployed data models, machine learning models, and real-time streaming solutions.
I have worked on high-impact projects across sectors including finance, healthcare, and telecommunications, where I led teams in implementing cutting-edge solutions, optimizing data pipelines, and automating reporting processes. My technical acumen is complemented by my ability to understand business requirements and translate them into actionable technical specifications. Additionally, I have worked extensively with large-scale databases such as Teradata, Oracle, and DynamoDB, optimizing performance and ensuring high availability of data solutions.
I hold the AWS Solutions Architect and AWS Data Analytics Specialty certifications, alongside a strong foundation in Java and related technologies.