About working in our team
The world is changing, becoming digital, and so are we. We are leaving the traditional bank behind us and are choosing to move forward as a digital enterprise. This is exactly why we need talented people who will join us on this journey.
The Sales Analytics cluster, part of the BDAA key area, serves the corporate customer segment as well as the private and business customer segments. We combine and consolidate the bank's data in our Data Lake to identify customer needs and transform them into sales opportunities using customized analytical models.
In your role as a (Senior) Big Data Python Developer, you are responsible for the new and further development of the cluster's products. You'll use the latest Big Data and Cloud technologies to process large amounts of data on our Data Lake and deliver services on premises or in the cloud. Working in cross-functional teams with appropriate technology tools, you create useful IT products for the entire company. You develop new models with business experts and work hand in hand with data scientists to integrate their products and results into the system landscape. Co-designing this system landscape, as well as rapid prototyping and the testing and evaluation of new technologies, is part of your tasks. You always keep an eye on the big picture, realize synergy effects beyond individual problems, and create added value for the entire cluster beyond your team.
- Working in a cross-functional cell
- End-to-end (E2E) responsibility for the development process, from ideation to handover to production, working hand in hand with operations colleagues
- You will develop new models with business experts
- You will use the latest Big Data and Cloud technologies to process large amounts of data on our Data Lake and deliver services on premises or in the cloud
- 3rd-level support: solving incidents, fixing bugs, and supporting the operations engineers
- Supporting the whole development stack, e.g. design, coding, package building, testing, RO-preparation and documentation
Requirements:
- Good knowledge of Python
- Knowledge of Apache Spark
- Knowledge of ETL processes
- Knowledge of SOA, Web services, REST API
- Relational (SQL; Postgres is an asset) and non-relational database solutions
Nice to have:
- Hive / Hbase / HDP
- Experience with cloud platforms such as GCP – Google Cloud Platform (preferred), AWS or Azure
- Container and orchestration technology such as Docker or Kubernetes
- Experience with Hadoop ecosystem or willingness to learn
- Experience with CI/CD and DevOps setup
- Data warehouse
- Scheduling tools (such as Cron, UC4 or similar)
Foreign language skills:
- English B2 level mandatory
- German optional
Please add the following disclaimer to your application:
2. I have read the content of the information clause, including the information about the purpose and methods of processing personal data, the right to access my personal data, and the right to correct, rectify and delete it.