Lead Data Engineer (Director) - Individual Contributor - Azure, Data Factory, Databricks, Apache Spark - London Based

I am hiring for a Lead Data Engineer for a crucial role within one of my Investment Bank clients in London. This role is at Director level, as they require a very senior candidate …

Responsibilities:
- Leading data engineering practices
- Supporting current applications
- Introducing AI practices to the team/project
- Communicating key successes with stakeholders

Key Skills:
- Azure
- Databricks
- Apache Spark
- Data Science, AI, ML

Certifications or continued upskilling/contributions to blog posts within Data & AI are beneficial but not essential. This is a … UK without sponsorship. If you are interested, please apply or email me directly: aaron.dhammi@nicollcurtin.com
Spark Architect/SME - Contract Role (6 months to begin with, extendable)
Location: Leeds, UK (minimum 3 days onsite)

Context: Legacy ETL code (for example, DataStage) is being refactored into PySpark using Prophecy low-code/no-code tooling and the available converters. The converted code is causing failures and performance issues. …

Skills:
- Spark architecture: component-level understanding of Spark data integration (PySpark, scripting, variable setting, etc.), Spark SQL, and Spark explain plans.
- Spark SME: able to analyse Spark code failures through Spark plans and make corrective recommendations (see the sketch below).
- Spark SME: able to review PySpark … and Spark SQL jobs and make performance improvement recommendations.
- Spark SME: able to understand DataFrames/Resilient Distributed Datasets, diagnose any memory-related problems, and make corrective recommendations.
- Monitoring: able to monitor Spark jobs using wider tools such as Grafana to see …
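To illustrate the kind of plan-driven failure analysis this role calls for, here is a minimal PySpark sketch. The table paths and join column are hypothetical, and the broadcast hint is just one common recommendation an SME might make after reading a plan.

```python
# Minimal sketch (hypothetical paths/columns): inspect a join's physical
# plan, spot a shuffle-heavy SortMergeJoin, and recommend a broadcast.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("plan-review").getOrCreate()

orders = spark.read.parquet("/data/orders")        # large fact table (assumed)
countries = spark.read.parquet("/data/countries")  # small dimension table (assumed)

joined = orders.join(countries, "country_code")
# "formatted" mode prints a readable physical plan; a SortMergeJoin against
# a small dimension table usually signals an avoidable shuffle.
joined.explain(mode="formatted")

# Corrective recommendation: broadcast the small side to remove the shuffle.
tuned = orders.join(F.broadcast(countries), "country_code")
tuned.explain(mode="formatted")  # should now show a BroadcastHashJoin
```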
… prem solutions to the cloud, including re-architecting. Prior experience working on data-focused projects, e.g. data warehousing, big data, data streaming. Proficiency with Apache Kafka, Apache Spark, Apache Flink, etc. We are an equal opportunities employer and welcome applications from all suitably qualified persons regardless …
… Certified Solutions Architect, AWS Certified Data Analytics Specialty, or AWS Certified Big Data Specialty. Experience with other big data and streaming technologies such as Apache Spark, Apache Flink, or Apache Beam. Knowledge of containerization and orchestration technologies such as Docker and Kubernetes. Experience with data lakes …
… workplace where each employee's privacy and personal dignity are respected and protected from offensive or threatening behaviour, including violence and sexual harassment.

Role: Apache Spark Application Developer

Skills Required: Hands-on experience as a software engineer in a globally distributed team working with the Scala and Java programming languages … (preferably both). Experience with the big data technologies Spark/Databricks and Hadoop/ADLS is a must. Experience in any one cloud platform: Azure (preferred), AWS, or Google. Experience building data lakes and data pipelines in the cloud using Azure and Databricks or similar tools. Spark Developer …
… data engineering or a similar role.
> Proficiency in programming languages such as Python, Java, or Scala.
> Strong experience with data processing frameworks such as Apache Spark, Apache Flink, or Hadoop.
> Hands-on experience with cloud platforms such as AWS, Google Cloud, or Azure.
> Experience with data warehousing …
… working closely with our product teams on existing projects and new innovations to support company growth and profitability.

Our Tech Stack: Python, Scala, Kotlin, Spark, Google Pub/Sub, Elasticsearch, BigQuery, PostgreSQL, Kubernetes, Docker, Airflow

Key Responsibilities: Designing and implementing scalable data pipelines using tools such as Apache Spark … Data Infrastructure projects, as well as designing and building data-intensive applications and services. Experience with data processing and distributed computing frameworks such as Apache Spark. Expert knowledge in one or more of the following languages: Python, Scala, Java, Kotlin. Deep knowledge of data modelling, data access, and …
… data components such as Azure Data Factory, Azure SQL DB, Azure Data Lake, etc. Strong Python and SQL skills for data manipulation. Experience with Apache Spark and/or Databricks. Familiarity with BI visualization tools like Power BI. Experience in managing end-to-end analytics pipelines (batch and … such as Azure Data Engineer Associate are desirable. Knowledge of data ingestion methods for real-time and batch processing. Proficiency in PySpark and debugging Apache Spark workloads.

What's in it for you?
- Annual bonus scheme – up to 10%
- Excellent pension scheme
- Flexible working
- Enhanced family-friendly policies …
… comfortable designing and constructing bespoke solutions and components from scratch to solve the hardest problems. Adept in Java, Scala, and big data technologies like Apache Kafka and Apache Spark, they bring a deep understanding of engineering best practices. This role involves scoping, sizing, and estimating … be considered. Key responsibilities of the role are summarised below:
- Design and implement large-scale data processing systems using distributed computing frameworks such as Apache Kafka and Apache Spark (a minimal sketch of this pattern follows the list).
- Architect cloud-based solutions capable of handling petabytes of data.
- Lead the automation of CI/CD pipelines for …
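As an illustration of the Kafka-plus-Spark pattern named in the first responsibility, here is a minimal Structured Streaming sketch. The broker address, topic name, event schema, and output paths are all hypothetical.

```python
# Minimal sketch (hypothetical broker/topic/schema): Spark Structured
# Streaming consuming a Kafka topic and writing parsed events to Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

schema = StructType([
    StructField("trade_id", StringType()),
    StructField("price", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker address
    .option("subscribe", "trades")                     # assumed topic name
    .load()
    # Kafka delivers a binary "value" column; parse it into typed fields.
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/trades")                # assumed output path
    .option("checkpointLocation", "/chk/trades")   # required for fault tolerance
    .start()
)
```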
… development (ideally AWS). Knowledge of, and ideally hands-on experience with, data streaming, event-based architectures, and Kafka. Strong communication and interpersonal skills. Experience with Apache Spark or Apache Flink would be ideal, but not essential. Please note, this role is unable to provide sponsorship. If this role …
… to develop unit test cases. Help with backlog grooming. Key skills: Extensive experience in developing big data pipelines in the cloud using big data technologies such as Apache Spark (see the orchestration sketch below). Expertise in performing complex data transformations using Spark SQL queries. Experience in orchestrating data pipelines using Apache Airflow. Proficiency in …
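To ground the orchestration skill mentioned above, here is a minimal Airflow sketch that submits a Spark job via the SparkSubmitOperator from the apache-spark provider package. The DAG id, script path, and connection name are hypothetical.

```python
# Minimal sketch (hypothetical DAG id, paths, connection): an Airflow DAG
# that runs a PySpark transformation once a day via spark-submit.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_transform",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    transform = SparkSubmitOperator(
        task_id="spark_transform",
        application="/jobs/transform.py",  # assumed PySpark script holding the Spark SQL logic
        conn_id="spark_default",           # Airflow connection pointing at the Spark cluster
        application_args=["{{ ds }}"],     # pass the execution date into the job
    )
```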
… Flask, Tornado or Django; Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow, or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands …
… at scale, utilising the best breed of cloud services and technologies. So, what tools and technologies will you be using? AWS, Python, Databricks/Spark, Trino, Airflow, Docker, CloudFormation/Terraform, SQL/NoSQL. We provide you with the opportunity to think freely and work creatively, and right now … Other skills we are looking for you to demonstrate include: Experience of data storage technologies (Delta Lake, Iceberg, Hudi; a brief Delta Lake sketch follows below). Sound knowledge and understanding of Apache Spark, Databricks, or Hadoop. Ability to take business requirements and translate these into tech specifications. Knowledge of architecture best practices and patterns. Competence …
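For context on the storage technologies listed, here is a minimal Delta Lake sketch showing versioned writes and time travel. It assumes a Spark session already configured with the delta-spark package, and the paths are hypothetical.

```python
# Minimal sketch (hypothetical paths; assumes delta-spark is configured):
# write a DataFrame as a Delta table, then read an earlier version back.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-demo").getOrCreate()

df = spark.read.parquet("/landing/trades")  # assumed input

# Each write creates a new version in the Delta transaction log.
df.write.format("delta").mode("overwrite").save("/lake/trades")

# Time travel: read the table as of an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/lake/trades")
```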
Greater Bristol Area, United Kingdom Hybrid / WFH Options
Anson McCade
… and product development, encompassing experience in both stream and batch processing. Designing and deploying production data pipelines, utilizing languages such as Java, Python, Scala, Spark, and SQL. In addition, you should have proficiency or familiarity with: scripting and data extraction via APIs, along with composing SQL queries. Integrating data …
… pipelines. Know your way around Unix-based operating systems. Experience working with any major cloud provider (AWS, GCP, Azure). Fluency in English. Experience using Apache Airflow. Experience using Docker. Experience using Apache Spark.

Benefits:
- Salary £40-50K per annum, dependent on skills and experience
- 25 days …
Cheltenham, Gloucestershire, United Kingdom Hybrid / WFH Options
Third Nexus Group Limited
… and product development, encompassing experience in both stream and batch processing. · Designing and deploying production data pipelines, utilizing languages such as Java, Python, Scala, Spark, and SQL. In addition, you should have proficiency or familiarity with: · Scripting and data extraction via APIs, along with composing SQL queries. · Integrating data …
… run on AWS and soon Azure, with plans to also add GCP and on-prem. They are adding extensive usage of distributed compute on Spark, starting with their more complex ETL and advanced analytics functions, e.g. time series processing. They soon plan to integrate other approaches, including native distributed … PyTorch/TensorFlow, Spark-based distributor libraries, or Horovod.

TECH STACK: Python, Flask, Redis, Postgres, React, Plotly, Docker, Temporal; AWS Athena SQL, Athena & EMR Spark, ECS Fargate; Azure Synapse/Data Lake Analytics, HDInsight.

KEY RESPONSIBILITIES:
- Lead the productionisation of Monolith's ML models and data processing pipelines … both mid/low-level system design and exemplary hands-on implementations using Spark and other tech stacks (a sketch of one such Spark pattern follows this list).
- Shape the ML engineering culture and practices around model & data versioning, scalability, model benchmarking, and ML-specific branching & release strategy.
- Concisely break down complex high-level ML requirements into smaller deliverables (epic …
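As one example of the distributed time-series processing mentioned above, here is a minimal PySpark sketch using applyInPandas to run a per-series pandas transformation across the cluster. The column names, window size, and paths are hypothetical.

```python
# Minimal sketch (hypothetical columns/paths): hand each sensor's time
# series to a pandas function in parallel via applyInPandas.
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ts-processing").getOrCreate()

readings = spark.read.parquet("/data/readings")  # assumed columns: sensor_id, ts, value

def smooth(series: pd.DataFrame) -> pd.DataFrame:
    # Runs once per group (sensor) on an executor; each group arrives
    # as an ordinary pandas DataFrame.
    series = series.sort_values("ts")
    series["value"] = series["value"].rolling(window=10, min_periods=1).mean()
    return series

smoothed = readings.groupBy("sensor_id").applyInPandas(
    smooth, schema="sensor_id string, ts timestamp, value double"
)
smoothed.write.mode("overwrite").parquet("/data/readings_smoothed")
```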
… Terraform/Docker/Kubernetes. Write software using Java, Scala, or Python. The following are nice to have, but not required: Apache Spark jobs and pipelines. Experience with any functional programming language. Database design concepts. Writing and analysing SQL queries. Application over … VIOOH: Our recruitment team …
Manchester, England, United Kingdom Hybrid / WFH Options
Made Tech
… and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks, or Hadoop (a brief sketch follows below). Good understanding of possible architectures involved in modern data system design (Data Warehouses, Data Lakes, Data Meshes). Ability to …
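To illustrate the mixed-format handling this listing asks about, here is a minimal Apache Spark sketch. The file paths, column names, and join key are hypothetical.

```python
# Minimal sketch (hypothetical paths/columns): read CSV and JSON inputs,
# normalise types, and join them into one curated Parquet table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("mixed-formats").getOrCreate()

# CSV with a header row; schema is inferred here for brevity.
customers = spark.read.option("header", True).csv("/landing/customers.csv")

# Newline-delimited JSON events.
events = spark.read.json("/landing/events.json")

curated = (
    events.withColumn("ts", F.to_timestamp("ts"))
    .join(customers, "customer_id")
)
curated.write.mode("overwrite").parquet("/curated/events_by_customer")
```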
Bristol, England, United Kingdom Hybrid / WFH Options
Made Tech
… and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks, or Hadoop. Good understanding of possible architectures involved in modern data system design (Data Warehouses, Data Lakes, Data Meshes). Ability to …