Jakub Tkaczuk
Supervisor: Piotr Bilski
Artificial Intelligence techniques and their applications are changing how we perceive the world around us.
AI algorithms and engines are becoming a standard feature in new consumer devices. But it is not only about software and deep network architectures: new applications and use cases require new hardware platforms that will enable these new experiences. AI is everywhere. We are used to Conversational Agents, Natural Language Processing and other voice-driven applications. We acknowledge that Deep Learning techniques have significantly changed the field of Computer Vision, and machine-learning-driven applications now match or outperform human capabilities in some tasks. Recommendation services guide us not only when listening to or watching multimedia content, but also in daily tasks such as browsing the internet.
But can AI change the traditional music listening experience? The design of loudspeaker transducers has not changed in many years. Crossovers still rely on the traditional RLC approach. Amplifiers have become more efficient, but we still cannot call them intelligent. In this short poster we will review AI techniques that might contribute to a next-generation listening experience. We will look into Sound Recognition and Deep Signal Processing techniques and try to combine them in the design of next-generation, distributed sound reproduction systems.