Microsoft Azure is recommended, as Microsoft Fabric is integrated with Azure services Experience of designing robust, secure and compliant platform capabilities Strong understanding of Apache Spark, including its architecture, components, and how to create, monitor, optimize, and scale Spark jobs. Experience of working in a DevOps/ more »
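Spark's core execution model (transforming partitioned data in parallel, then aggregating partial results) can be sketched in plain Python. This is a hypothetical illustration only; a real Spark job would use the pyspark API and distribute the partitions across executors:

```python
from functools import reduce

# Toy illustration of Spark's map/reduce-over-partitions model.
# In real Spark, each partition would live on a different executor.
partitions = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]

# "map" stage: apply a transformation within each partition independently
mapped = [[x * x for x in part] for part in partitions]

# "reduce" stage: combine the partial results from each partition
partial_sums = [sum(part) for part in mapped]
total = reduce(lambda a, b: a + b, partial_sums)

print(total)  # sum of squares of 1..9 -> 285
```

Monitoring and scaling in real deployments then centre on how those partitions are sized and shuffled, which this sketch deliberately omits.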
ETL/ELT tools. Experience with NoSQL-type environments, Data Lakes and Lake-Houses (Cassandra, MongoDB or Neptune). Experience with distributed storage and processing engines such as Apache Hadoop and Apache Spark. Experience with message brokering/stream processing services such as Apache Kafka, Confluent, Azure Stream Analytics. Experience in Test Driven more »
objectives, so each team leverages the technology that fits their needs best. You’ll see us working with data processing/streaming frameworks like Apache Flink and Spark; database technologies like MySQL, PostgreSQL, DynamoDB and Redis; and breaking things using in-house chaos principles and tools such as … latency, near real-time products: Java and Scala based Web Services, Databricks Data Lakes (Delta Lakes), AWS Kinesis and MSK, AWS ElasticSearch, AWS RDS, Apache Flink & Spark, scripting using Python, and Terraform for infrastructure as code. The interview process: our interview aims to take a relaxed & practical approach more »
and classification techniques, and algorithms Fluency in a programming language (Python, C++, Java, SQL) Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau more »
classification techniques, and algorithms Fluency in a programming language (Python, C, C++, Java, SQL) Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau more »
language, ideally Python but can also be Java or C/C++ SQL experience Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau) Get in touch with Ella Alcott - Ella@engagewithus.com more »
quality of data. Key Requirements: Strong experience designing data pipelines/warehouses using AWS and Snowflake. Exposure to big data technologies such as Kafka, Spark, or Hadoop. Solid experience with Snowflake, including performance optimisation and cost management. Strong experience with SQL and Data modelling. Excellent understanding of AWS architecture more »
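As a rough sketch of the data-modelling side of such a role, the snippet below builds a toy star schema and aggregates a fact table by a dimension attribute. It uses the stdlib sqlite3 module purely as a stand-in for Snowflake, and all table and column names are illustrative:

```python
import sqlite3

# Minimal dimensional-modelling sketch; sqlite3 stands in for Snowflake.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE fact_orders (
    order_id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES dim_customer(customer_id),
    amount REAL
);
INSERT INTO dim_customer VALUES (1, 'EMEA'), (2, 'APAC');
INSERT INTO fact_orders VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")

# Typical warehouse query: join the fact table to a dimension and aggregate
rows = conn.execute("""
    SELECT c.region, SUM(f.amount)
    FROM fact_orders f JOIN dim_customer c USING (customer_id)
    GROUP BY c.region ORDER BY c.region
""").fetchall()
print(rows)  # [('APAC', 75.0), ('EMEA', 350.0)]
```

In Snowflake itself, the performance-optimisation and cost-management work mentioned above would revolve around warehouse sizing and clustering rather than anything visible in this sketch.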
Surrey, England, United Kingdom Hybrid / WFH Options
Hawksworth
working in the world of Data Science You're more than capable with SQL & Python You have exposure to big data technologies such as Spark Ideally you will have experience with statistical analysis, machine learning algorithms, and data mining techniques You have excellent communication skills and can communicate well more »
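The statistical-analysis side of such a role can be illustrated with a minimal Python summary; the figures below are invented, and the stdlib statistics module stands in for a fuller data science stack:

```python
import statistics

# Hypothetical daily conversion rates for a quick descriptive summary.
rates = [0.12, 0.15, 0.11, 0.14, 0.13]

mean = statistics.mean(rates)
stdev = statistics.stdev(rates)  # sample standard deviation

print(round(mean, 3))  # 0.13
```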
ownership • Python, Java or similar (Ruby or Node) or another Functional Language • JavaScript and associated frameworks, preferably Vue, or similar • Cloud technologies • SQL (advantageous) • Spark (advantageous) • Docker/Kubernetes (advantageous) • MongoDB, SQL, Postgres & Snowflake (advantageous) • Developing online, cloud-based SaaS products • Leading and building scalable architectures and distributed systems more »
dynamic. Knowledge and understanding of OTC products (Interest Rate Swaps, Variance Swaps, CDS, etc.) bookings. Familiarity with C++ and Big Data tools such as Spark, Kafka, Elastic. Join us and be part of a team that values innovation, collaboration, and excellence. Take your career to new heights with a more »
multiple tasks and projects simultaneously. Preferred Qualifications AWS Certified Solutions Architect or other relevant AWS certifications. Experience with big data technologies such as Hadoop, Spark, or similar. Knowledge of data governance and data quality best practices. Familiarity with machine learning and AI concepts and tools. more »
in data science, preferably within the energy or utilities sector. Technical Skills: Proficiency in Python, R, SQL, and big data technologies such as Hadoop, Spark, or Kafka. Experience with machine learning frameworks such as TensorFlow or PyTorch. Analytical Skills: Strong problem-solving skills with the ability to derive meaningful more »
to 10% What Will Help You On The Job Familiarity with running software services at scale AWS Infrastructure, Airflow, Kafka and data streaming using Spark/Scala Understanding of networking fundamentals (OSI layers 2-7) Technical and software engineering background in the areas of cloud computing, enterprise computing, servers and more »
as Tableau, Power BI and Sigma. • Experience with programming languages such as Python, R, and/or Julia. • Familiarity with data processing frameworks like Spark or Hadoop is a plus. • Solid understanding of statistical analysis techniques, data mining methods, and machine learning algorithms. • Strong analytical and problem-solving skills more »
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
refining them to strong results. Exposure to Python data science stack Knowledge and working experience of AGILE methodologies. Proficient with SQL. Familiarity with Databricks, Spark and geospatial data/modelling are a plus. We’ll help you gain… Experience working in a high-performance environment where collaboration and business more »
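For the geospatial angle mentioned above, a common building block is the haversine great-circle distance. A minimal sketch in plain Python, where the coordinates are approximate and purely illustrative:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Reigate to central London (approximate coordinates)
d = haversine_km(51.2362, -0.2055, 51.5074, -0.1278)
print(round(d, 1))
```

Libraries such as Spark's geospatial extensions build on exactly this kind of point-to-point primitive at scale.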
space of data and AI technologies and business scenarios. Strong understanding of cutting-edge and legacy Big Data and AI technologies such as Hadoop, Spark, OpenAI and Claude, as well as architectures and domains such as Computer Vision, NLP, Neural Networks, Machine Learning, Generative AI, Data Warehouse and LakeHouse more »
platforms, Azure is desirable Software development experience is desirable Data architecture knowledge is desirable API design and deployment experience is desirable Big data (e.g. Spark) experience is desirable NoSQL DB experience is desirable Qualifications 2+ years of data science experience Right to work in the UK and/or more »
Skills & Experience At least 10 years' experience working with JavaScript or Python/Java Previous experience deploying software into the Cloud EKS, Docker, Kubernetes, Apache Spark or NiFi Microservice architecture experience Experience with AI/ML systems more »
in Computer Science, Engineering (or other related STEM subject) 5+ years' experience in data engineering 2+ years in a leadership role. Experience working with Apache Spark, Azure Data Factory and other data pipeline tools. Strong programming skills. Impeccable communication skills. Precise attention to detail. Pioneering attitude. If you more »
City of London, London, United Kingdom Hybrid / WFH Options
TECHNOLOGY RECWORKS LIMITED
sponsors) Knowledge and experience of the following would be advantageous: Knowledge of Enterprise Architecture Frameworks Good knowledge of Azure DevOps Pipelines Strong experience in the Apache Spark framework Previous experience in designing and delivering data warehouse and business intelligence solutions using the on-premises Microsoft stack (SSIS, SSRS, SSAS) Knowledge more »
Manchester, North West, United Kingdom Hybrid / WFH Options
TECHNOLOGY RECWORKS LIMITED
sponsors) Knowledge and experience of the following would be advantageous: Knowledge of Enterprise Architecture Frameworks Good knowledge of Azure DevOps Pipelines Strong experience in the Apache Spark framework Previous experience in designing and delivering data warehouse and business intelligence solutions using the on-premises Microsoft stack (SSIS, SSRS, SSAS) Knowledge more »
data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Our Commitment to Diversity and Inclusion At Databricks, we are more »
they are on the lookout for 2 AWS Data Engineers to come in on a contract basis. Key Skills/Requirements: Must have Python & Spark experience Must have strong AWS experience Must have Terraform experience SQL & NoSQL experience Have built out Data Warehouses & built Data Pipelines Strong Databricks & Snowflake more »
on experience. Requirements: Experience with a JVM language, Kotlin, Java, Scala, Clojure Knowledge of Typescript and React is beneficial Exposure to data pipelines using technologies such as Spark and Kafka Experience with cloud services (ideally AWS) Hybrid working 1-2 days per week in Central London. 110,000 depending on experience. Please apply and I more »