Ensure compatibility with multiple platforms, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Reduce dependencies to prevent version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the main capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to obtain a transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially valuable for applications that require immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);
```
Then connect the transcriber, stream audio to it, and close the session when done:

```csharp
await transcriber.ConnectAsync();

// Pseudocode for acquiring audio, for instance from a microphone
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to let developers build large language model (LLM) applications on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);
```

Audio Intelligence Models

Additionally, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features:

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

To learn more, visit the official AssemblyAI blog.

Image source: Shutterstock