PhD opportunities in Machine Learning and Digital Content QoE

QxLab is participating in two Science Foundation Ireland Centres for Research Training. These centres will recruit cohorts of students into innovative, industry-partnered research training programmes.

If you are interested in machine learning for multimedia quality of experience or health applications for quality of life, apply to the ML-Labs Centre. If you are interested in speech and audio applications for Augmented or Virtual Reality, take a look at the D-Real Centre.

The ML-Labs and D-Real Centres for Research Training are recruiting now, with the first cohorts starting in September 2019.

This is the biggest single funding scheme for cohort-focused PhD training programmes in Ireland, with an investment by SFI of €100 million, and QxLab is part of two of the five training centres.

Media Coverage: Silicon Republic | Irish Tech News | Business World


AES Ireland Section AGM at UCD

The first AES Ireland Section meeting will take place in room B1.09 in the Computer Science building at UCD on Friday, February 15th. The meeting will begin with a lecture at 17:00 by Dr. Andrew Hines (details below), followed by the first AGM at 18:00.

If anyone would like to put themselves forward for election to any of the committee roles, please contact Enda Bates (e.bates@tcd.ie). Please note: you must be an AES member to qualify for these roles, but non-members are welcome to attend.

Speaker: Dr Andrew Hines, Assistant Professor, School of Computer Science, University College Dublin (qxlab.ucd.ie)

Title: Quality Assessment for Compressed Ambisonic Audio

Description:

Spatial audio with a high degree of sound quality and accurate localization is essential for creating a sense of immersion in a virtual environment. VR content creators can use spatial audio to direct the audience’s attention within their story or to guide the audience through a narrative in VR, relying on hearing something to focus attention before it is seen. Delivering spatial audio over networks requires efficient encoding techniques that can compress the raw audio content without compromising quality. Lossy compression schemes such as Opus typically reduce the amount of data to be sent by discarding some information. For ambisonic spatial audio, this discarded information can be important for listening quality or localization accuracy. Streaming service providers such as YouTube typically transcode uploaded content into various bit rates and need a perceptually relevant audio quality metric to monitor users’ perceived quality and spatial localization accuracy. This talk will present subjective listening test experiments that explore the effect of Opus codec compression on the quality as perceived by listeners. It will also introduce AMBIQUAL, a full-reference objective spatial audio quality metric that derives both listening quality and localization accuracy metrics directly from B-format ambisonic audio.
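To make the transcoding step described above concrete, here is a minimal sketch (not the pipeline used in the talk) that drives ffmpeg’s libopus encoder from Python to produce compressed versions of a first-order (4-channel) B-format recording at several bitrates, then decodes them back to WAV so a listening test or a full-reference metric can compare each condition against the original. The file name, the bitrate list, and the -mapping_family 255 option (used so libopus accepts the 4-channel layout; behaviour may depend on your ffmpeg build) are assumptions, not details from the talk.

```python
import subprocess
from pathlib import Path

# Hypothetical reference recording: a first-order ambisonic (4-channel B-format) WAV file.
REFERENCE = Path("scene_bformat.wav")

# Assumed bitrates to test, spanning low to near-transparent quality.
BITRATES_KBPS = [32, 64, 128, 256, 512]

def encode_condition(reference: Path, bitrate_kbps: int) -> Path:
    """Encode the reference with libopus at the given total bitrate, then decode back to WAV."""
    opus_file = reference.with_name(f"{reference.stem}_{bitrate_kbps}k.opus")
    decoded_file = reference.with_name(f"{reference.stem}_{bitrate_kbps}k_decoded.wav")

    # Encode: -mapping_family 255 asks libopus to code the four ambisonic
    # channels as independent streams (assumption about the local ffmpeg build).
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(reference),
         "-c:a", "libopus", "-b:a", f"{bitrate_kbps}k",
         "-mapping_family", "255", str(opus_file)],
        check=True,
    )

    # Decode back to PCM so the degraded condition can be compared
    # channel-by-channel against the uncompressed reference.
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(opus_file), str(decoded_file)],
        check=True,
    )
    return decoded_file

if __name__ == "__main__":
    for kbps in BITRATES_KBPS:
        print("Decoded condition:", encode_condition(REFERENCE, kbps))
```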

What do you tell a room full of PhD students?

When I was asked to give the talk, I went through many of the stages of the “PhD roller-coaster” compressed into several hours. I accepted the request without reflection (other than a “sure, that needs no preparation…”) and then panicked that my PhD experiences were stale and possibly no longer relevant. Then I reflected that I wasn’t being asked to advise through the lens of a student; the actual question was what advice I could offer as someone who has experienced both sides of the student-advisor relationship. Getting the research question right was an important first step. Next I read a few other blogs, papers and tweets. There is already a large body of work in the area of PhD advice, so I decided to skip the exhaustive literature review and to take a case-study approach focusing purely on my own experience.

Having “mastered the topic” (or at least as much as I was going to!), I scribbled a few notes on areas I thought I might want to cover: literature review, self-management, research network building, and developing your identity as a researcher. I then wondered how to present it. I considered what might make it engaging (neat slides, video examples) and decided to deliver the advice without aids, as an example of how, if the content of your talk is of interest to the audience, they will remain engaged even if they have nothing more interesting to look at than the speaker. To tie it together (and to help me remember what I planned to say) I decided to present it in the format of twelve tips. If you are interested in reading them, they were recently posted on the school blog.


It was a lot of Hot Air compared to Quantum Computers

I was back at the RDS in Dublin visiting the BT Young Scientist and Technology Exhibition. Beginning in 1963, the exhibition concept was created by two UCD academics from the School of Physics. Fast forward to thirty years ago, when I participated for the first of two visits. Arriving in again thirty years later, I was struck by the professional finish of the posters. So much has improved, but I still love the hand-made stands and eye-catching props that lure you into a project. As you can see from the newspaper clipping, our project may have involved a lot of hot air, but I recall there was some scientific rigour to our methodology!

I met the 2019 winner of the BT Young Scientist and Technology Exhibition, Adam Kelly, while judging the national finals of SciFest 2018, where he also won first prize. As a judge I was struck by his demonstration of all the attributes of a quality scientist: imagination, methodology and a great ability to communicate the work. He knew what he had done and was able to explain what he had not done, and why. Adam’s project for SciFest was entitled ‘An Open Source Solution to Simulating Quantum Computers Using Hardware Acceleration’ and was the overall winner; more than 10,000 students competed in the regional heats to progress to the national SciFest 2018 final.

Adam Kelly (Photo: Irish Times)

The event is an inspiring way to start the year: seeing the curiosity and scientific rigour on display from second-level students who are motivated not by the prizes but by the desire to explore interesting questions.

No more 1 minute poster intros!

Alessandro and Andrew travelled to London for the 13th Digital Music Research Network One-day Workshop, hosted at Queen Mary University of London. The event followed a format similar to recent ISMIR conferences: presenters at the poster sessions each gave a 4-minute overview of their research before the main poster session, so you could get a feel for the work and then go and talk to the presenters to find out more about their results. This format is a nice middle ground between the regular 10+5-minute oral presentation with questions and the rapid-fire 1-minute ‘poster madness’ sessions. Hopefully it will catch on at more conferences!

At the poster session Alessandro presented his work on “What happens to the musical works of the past?” As an SFI-funded PhD student in the Insight Research Centre at UCD, Alessandro is co-supervised by Emmanouil Benetos from Queen Mary University of London. He will be spending one year of his PhD based in London at the Alan Turing Institute.


Do Speech Assistants like Alexa have difficulty with Emotion?

The 26th Irish Conference on Artificial Intelligence and Cognitive Science was hosted by Trinity College Dublin on the 6th and 7th of December in the Long Room Hub. The conference covered an interesting variety of topics across computer science, psychology and neuroscience. This led to engaging presentations, including one asking whether Amazon’s Alexa suffers from alexithymia.


QxLab was well represented with two papers. Rahul, who is funded by a scholarship from the SFI CONNECT research centre for future networks, presented his work on Voice over IP quality monitoring in a presentation entitled “The Sound of Silence: How Traditional and Deep Learning Based Voice Activity Detection Influences Speech Quality Monitoring”. In his paper he investigated how different voice activity detection strategies can influence the outputs of a speech quality prediction model. His experiment used a dataset with labelled subjective quality judgments.
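As a toy illustration of the kind of decision a voice activity detector makes before a quality model sees the speech, here is a minimal sketch of a simple frame-energy VAD. It is not one of the traditional or deep-learning-based detectors compared in Rahul’s paper, and the frame length and threshold values are arbitrary assumptions for the example.

```python
import numpy as np

def energy_vad(signal: np.ndarray, sample_rate: int,
               frame_ms: float = 20.0, threshold_db: float = -35.0) -> np.ndarray:
    """Label each frame as speech (True) or silence (False) by comparing its
    RMS energy, relative to the loudest frame, against a fixed threshold."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)

    rms = np.sqrt(np.mean(frames.astype(np.float64) ** 2, axis=1) + 1e-12)
    rms_db = 20 * np.log10(rms / (rms.max() + 1e-12))
    return rms_db > threshold_db

if __name__ == "__main__":
    # Synthetic example: 1 s of noise-like "speech" followed by 1 s of near-silence.
    sr = 16000
    rng = np.random.default_rng(0)
    speech = rng.normal(0, 0.3, sr)
    silence = rng.normal(0, 0.001, sr)
    decisions = energy_vad(np.concatenate([speech, silence]), sr)
    print(f"{decisions.sum()} of {len(decisions)} frames flagged as speech")
```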

Alessandro also had a paper in the proceedings and gave an interesting talk on a DNN-based fusion approach for speech separation. Alessandro’s PhD is funded by the SFI Insight Centre for Data Analytics.

Andrew is now a Senior Member of the IEEE

In recognition of his contributions to the profession, Andrew has been elevated to Senior Member status in the IEEE. And he even got a nice wooden plaque to prove it!

The IEEE (Institute of Electrical and Electronics Engineers) is a professional association with over 430,000 members worldwide and is active in industry standards, conferences, and publications.