MIDI Bass Generation
Music generation has long been an active research area. Researchers in the field of music information retrieval (MIR) have proposed a variety of models and algorithms that form a solid foundation for further development. Current research and industry efforts focus mainly on automatic music composition, i.e. generating music from scratch. Various apps and web services already exist that ask for a few preferences, such as genre and tempo, and produce new music within seconds. Although the results are suitable as background music for films or wherever licence-free music is required, the generated pieces lack creativity and thus cannot compete with human-made music. The goal of this bachelor thesis is therefore to narrow the gap between generated music and human creativity by developing a program that extends a piano composition with a generated bass line. Based on deep neural network architectures, we aim to generate a bass line that matches a given piano MIDI input. Finally, piano and bass are combined into a new sound file, which serves to evaluate the result.
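To make the final combination step concrete, the following is a minimal sketch of how piano and generated bass events could be merged into one time-ordered sequence before rendering a sound file. The `NoteEvent` class and `merge_tracks` function are illustrative assumptions, not part of any existing library; a real implementation would operate on MIDI messages via a MIDI toolkit.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(order=True)
class NoteEvent:
    """A simplified note event; only onset time is used for ordering."""
    onset: float                          # onset time in seconds
    pitch: int = field(compare=False)     # MIDI pitch number (0-127)
    duration: float = field(compare=False)
    channel: int = field(compare=False)   # e.g. 0 = piano, 1 = bass

def merge_tracks(piano: List[NoteEvent], bass: List[NoteEvent]) -> List[NoteEvent]:
    """Merge the piano input and the generated bass line into one
    time-ordered event stream (stable sort keeps piano first on ties)."""
    return sorted(piano + bass)

# Hypothetical example: two piano notes and one generated bass note.
piano = [NoteEvent(0.0, 60, 0.5, 0), NoteEvent(0.5, 64, 0.5, 0)]
bass  = [NoteEvent(0.0, 36, 1.0, 1)]
merged = merge_tracks(piano, bass)
```

In practice the merged stream would then be written out as a multi-track MIDI file or synthesized to audio for listening evaluation.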