Posted February 5

About Us: Our AI-powered technology provides real-time speech-to-speech translation (simultaneous interpretation) across 47 languages. We need help achieving the following:

1. User Language Selection – Each participant should be able to choose their preferred language for both speaking and listening in real time. (We assume this can be achieved using your SDKs and integrated into Teams through a Marketplace app – please confirm.)
2. Audio Stream Access – We need to access each participant’s audio stream and send their selected “From” and “To” language parameters to our API.
3. Audio Replacement – Instead of the default Teams audio, we require programmatic access to each participant’s default audio OUTPUT so that we can replace it with our AI-translated audio stream. This ensures a seamless experience for each participant.

Interpretation functionality (designed for human interpreters) already exists in Teams. I came across the following API on the community portal, which appears to describe how to access independent audio streams: Connecting to a Teams Meeting via SDK and Read Raw Audio Streams. Is this available, and will it cover the first part of the requirement above?

Additionally, we need a way to play back the translated audio for each participant individually. Since multilingual Teams calls may involve 20–30 spoken languages, a single combined audio stream/output will not work. We need each participant’s input and output audio streams to be processed independently within the call.

We do not want to use bots or embed Teams within an application wrapper. Since similar functionality already exists for human interpreters, we are looking to enable the same for AI-powered translation. We need help with this at the earliest. Thanks so much!

Arun
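For illustration only, here is a minimal TypeScript sketch of the per-participant routing described above: each participant declares a speaking and listening language, each inbound raw audio frame is fanned out through a translation backend once per listener, and a separate output is delivered per participant. Every type and function in it (LanguagePreference, AudioFrame, TranslateFn, PlaybackSink, TranslationRouter) is a hypothetical placeholder, not part of the Teams SDK, Graph API, or any Microsoft product; whether the real SDKs expose the raw audio access and per-participant output replacement this sketch assumes is exactly what the question above asks Microsoft to confirm.

```typescript
// Hypothetical sketch of per-participant translation routing.
// None of these types correspond to a real Teams/Graph SDK surface.

interface LanguagePreference {
  participantId: string;
  speaks: string;   // BCP-47 tag, e.g. "hi-IN"
  listens: string;  // BCP-47 tag, e.g. "en-US"
}

// A raw PCM audio frame captured from one participant (assumed format).
interface AudioFrame {
  participantId: string; // who spoke this frame
  pcm: Int16Array;       // e.g. 16 kHz mono samples
}

// Hypothetical translation backend: takes audio plus "From"/"To"
// language tags and resolves with translated audio for one listener.
type TranslateFn = (
  pcm: Int16Array,
  from: string,
  to: string
) => Promise<Int16Array>;

// Hypothetical per-participant playback sink, i.e. whatever mechanism
// would replace that participant's default Teams audio output.
type PlaybackSink = (participantId: string, pcm: Int16Array) => void;

class TranslationRouter {
  private prefs = new Map<string, LanguagePreference>();

  constructor(
    private translate: TranslateFn,
    private playTo: PlaybackSink
  ) {}

  // Called whenever a participant changes their language selection.
  setPreference(pref: LanguagePreference): void {
    this.prefs.set(pref.participantId, pref);
  }

  // Called for every inbound raw audio frame. The frame is translated
  // once per listener into that listener's chosen language; the
  // speaker does not receive their own audio back.
  async onAudioFrame(frame: AudioFrame): Promise<void> {
    const speaker = this.prefs.get(frame.participantId);
    if (!speaker) return;

    const jobs: Promise<void>[] = [];
    for (const listener of this.prefs.values()) {
      if (listener.participantId === frame.participantId) continue;
      jobs.push(
        this.translate(frame.pcm, speaker.speaks, listener.listens)
          .then((out) => this.playTo(listener.participantId, out))
      );
    }
    await Promise.all(jobs);
  }
}

// Example wiring with stub implementations.
const router = new TranslationRouter(
  async (pcm, _from, _to) => pcm,          // stub: returns audio untranslated
  (id, pcm) => console.log(`play ${pcm.length} samples to ${id}`)
);

router.setPreference({ participantId: "p1", speaks: "hi-IN", listens: "hi-IN" });
router.setPreference({ participantId: "p2", speaks: "en-US", listens: "en-US" });

router.onAudioFrame({ participantId: "p1", pcm: new Int16Array(320) });
```

The sketch only illustrates the fan-out: one independent translated output per listener rather than a combined stream, which is why the question stresses per-participant input and output access rather than a single mixed audio feed.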