
ios – Start microphone using App Intents “required condition is false: IsFormatSampleRateAndChannelCountValid(format)”


I want to include Speech Recognition in my app; for this I used the Speech framework from Apple together with the App Intents framework. The App Intent “Listen” starts the speech recognition. Now I have the problem that I keep getting the following error message:

*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: IsFormatSampleRateAndChannelCountValid(format)'
 terminating with uncaught exception of type NSException

on the following line:

 self.audioEngine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
            self.recognitionRequest?.append(buffer)
    }

This problem does not occur when I execute the same code with a simple button click, but ONLY with the App Intent “Listen”. I’m aware that there is a problem with the microphone that Siri uses while listening to the App Intent. But how can I solve this issue? I did a lot of research and also tried it with async functions, but it didn’t help.
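Is something along these lines the right direction, i.e. activating the audio session and checking the input node’s format before installing the tap? This is only a rough sketch (the helper name is made up by me):

import AVFoundation
import Speech

// Made-up helper: activate the session and only install the tap when the
// input format is actually usable, instead of crashing on an invalid one.
func installTapIfPossible(on engine: AVAudioEngine,
                          request: SFSpeechAudioBufferRecognitionRequest) throws -> Bool {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: .mixWithOthers)
    try session.setActive(true, options: .notifyOthersOnDeactivation)

    let format = engine.inputNode.outputFormat(forBus: 0)
    // While Siri still holds the microphone this can report 0 Hz / 0 channels,
    // which seems to be what IsFormatSampleRateAndChannelCountValid rejects.
    guard format.sampleRate > 0, format.channelCount > 0 else { return false }

    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        request.append(buffer)
    }
    return true
}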

My Code:

import Speech
import UIKit

class TestVoice: UIControl, SFSpeechRecognizerDelegate {    
let speechRecognizer        = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
   var recognitionRequest      : SFSpeechAudioBufferRecognitionRequest?
   var recognitionTask         : SFSpeechRecognitionTask?
   let audioEngine             = AVAudioEngine()


func stopRecording() {
    self.audioEngine.stop()
    self.recognitionRequest?.endAudio()
}

func setupSpeech() {
    
       self.speechRecognizer?.delegate = self
       SFSpeechRecognizer.requestAuthorization { (authStatus) in

           switch authStatus {
               case .authorized:
                   print("yes")
               case .denied:
                   print("died")
               case .restricted:
                   print("died")
               case .notDetermined:
                   print("none")
           }
           OperationQueue.main.addOperation() {
           }
       }
   }


func startRecording() -> Bool {
        setupSpeech()
        clearSessionData()
        createAudioSession()
        recognitionRequest = bufferRecRequest()
        recognitionRequest?.shouldReportPartialResults = true
        self.recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest!, resultHandler: { (result, error) in

            var finished = false

            if let result = result {
                // do something
                finished = result.isFinal
            }

            if error != nil || finished {
                self.audioEngine.stop()
                self.audioEngine.inputNode.removeTap(onBus: 0)
                self.recognitionRequest = nil
                self.recognitionTask = nil
            }
        })
        
    
    let recordingFormat = self.audioEngine.inputNode.outputFormat(forBus: 0)
    self.audioEngine.inputNode.installTap(onBus: 0, bufferSize: 2048, format: recordingFormat) { (buffer, when) in
            self.recognitionRequest?.append(buffer)
    }
        
        self.audioEngine.prepare()
        
        do {
            try self.audioEngine.start()
        } catch {
            print("audioEngine couldn't start because of an error.")
            delegate?.showFeedbackError(title: "Sorry", message: "Your microphone is used somewhere else")
            return false
        }
    return true
    }

func clearSessionData(){
    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }
}

func bufferRecRequest()->SFSpeechAudioBufferRecognitionRequest{
    self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let recognitionRequest = recognitionRequest else {
        fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
    }
    return recognitionRequest
}

func createAudioSession()-> Bool{
    
    do {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, options: .mixWithOthers)
        
    } catch {
        print("audioSession properties weren't set due to an error.")
        delegate?.showFeedbackError(title: "Sorry", message: "Mic is busy")
        return false
    }
    return true
}
}

The App Intent

 import AppIntents
 import UIKit


 struct ListenIntent: AppIntent {
     static var openAppWhenRun: Bool = true

    @available(iOS 16, *)
     static let title: LocalizedStringResource = "Listen"
     static var description =
        IntentDescription("Listens to the User")
     let speechRecognizer = TestVoice()

     func perform() throws -> some IntentResult & ProvidesDialog {
         speechRecognizer.startRecording()
         return .result(dialog: "Done")
        
}}
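Or would making perform() async and deferring the start be the right approach instead? Roughly like this variant of the intent above (only a sketch; the delay value is arbitrary):

import AppIntents

struct ListenIntent: AppIntent {
    static var title: LocalizedStringResource = "Listen"
    static var description = IntentDescription("Listens to the User")
    static var openAppWhenRun: Bool = true

    let speechRecognizer = TestVoice()

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Rough idea: give Siri a moment to release the microphone before
        // the audio engine touches the input node. The delay is a guess.
        try await Task.sleep(nanoseconds: 500_000_000)
        let started = await MainActor.run { speechRecognizer.startRecording() }
        return .result(dialog: started ? "Done" : "The microphone was not available")
    }
}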
