This code is from an Angular app; it's a service that records audio.

It works fine when streaming the recording, but most of the time it returns an empty recording file when it goes through the timeoutWith branch of the last function, which is meant to be a fallback in case streaming fails because of a network connectivity problem.

Why do you think this code has an issue in the fallback branch?

import { Injectable } from '@angular/core'
import * as RecordRTC from 'recordrtc'
import { Observable, Subject, Subscription, of } from 'rxjs'
import * as moment from 'moment'
import { resample } from 'wave-resampler'
import { Socket } from 'ngx-socket-io'
import { debounceTime, filter, map, take, timeoutWith } from 'rxjs/operators'
import { UtilityService } from './utility.service'
import { TranslateService } from '@ngx-translate/core'
import { ActiveUserService } from './active-user.service'
const alawmulaw = require('alawmulaw')

interface RecordedAudioOutput {
	blob: Blob
	title: string
}

@Injectable({
	providedIn: 'root',
})
export class RecorderService {
	private stream: MediaStream
	private recorder
	private usingMediaRecorder: boolean = true
	private interval
	private startTime
	private recordedFileNameUrl = new Subject<any>()
	private recordingTimeSubject = new Subject<{
		formatted: string
		seconds: number
	}>()
	private recordingFailedSubject = new Subject<string>()
	private scriptProcessor: ScriptProcessorNode
	private audioWorklet: AudioWorkletNode
	private contextSampleRate = 48000
	readonly BACKEND_SAMPLE_RATE = 24000
	private recordId: number = 0

	private streamedAudioFileIgnored: boolean = false
	private urlSub: Subscription = null

	readonly MAX_RECORD_TIME: number = 5 * 60 //in seconds
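	// Local copy of the µ-law encoded audio, accumulated as a fallback payload in case streaming fails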
	private audioData: Uint8Array = new Uint8Array(
		this.MAX_RECORD_TIME * 1.5 * this.BACKEND_SAMPLE_RATE
	)
	private audioDataIndex = 0


	private internalBufferLength: number = 1024 * 8
	private int16ConvertArray: Int16Array

	constructor(
		private socket: Socket,
		private utilityService: UtilityService,
		private translate: TranslateService,
		private activeUser: ActiveUserService
	) {
		try {
			const context = new AudioContext()
			const sampleRate = context.sampleRate
			console.log('sample rate is', sampleRate)
			this.contextSampleRate = sampleRate
		} catch (e) {
			console.log(
				'error finding browser sample rate, falling back to default rate'
			)
		}
		this.int16ConvertArray = new Int16Array(
			(this.internalBufferLength * this.BACKEND_SAMPLE_RATE) / this.contextSampleRate
		)

		this.socket
		.fromEvent('disconnect')
		.pipe(debounceTime(2000))
		.subscribe((ev) => {
			console.log('socket disconnected')
			if (this.recorder && !this.streamedAudioFileIgnored) {
				this.utilityService.presentToast(
					this.translate.instant('POST_QUESTION.recordingInterrupted'),
					[],
					'warning'
				)
				this.streamedAudioFileIgnored = true
			}
		})
	}

	getFileNameUrl(): Observable<any> {
		return this.recordedFileNameUrl.asObservable()
	}
	getRecordedTime(): Observable<{ formatted: string; seconds: number }> {
		return this.recordingTimeSubject.asObservable()
	}
	sendRecordStream(msg, v5: boolean = false) {
		if (v5) {
			this.socket.emit('messagev5', msg)
		} else {
			this.socket.emit('messagev4', msg)
		}
	}
	initializeStream() {
		this.audioData.fill(0)
		this.audioDataIndex = 0
		this.streamedAudioFileIgnored = false
		this.recordId = parseInt(''+(new Date().getTime()) + this.utilityService.randomId(0, 10000))
		if (this.activeUser.isActive) {
			this.socket.emit('startGoogleCloudStreamv5', {
				recordId: this.recordId,
				userId: this.activeUser.user._id,
			})
		} else {
			this.socket.emit('startGoogleCloudStreamv5', { recordId: this.recordId })
		}

		console.log('streamingStarted')
	}
	endStream() {
		this.subscribeForStreamedAudioFileUrl(this.audioData.subarray(0, this.audioDataIndex))
		this.socket.emit(
			'endGoogleCloudStreamv5',
			this.usingMediaRecorder ? 'MR0.7.3' : ''
		)
		console.log('end stream emitted')
		if (this.stream) {
			// stop the browser microphone
			const tracks = this.stream.getTracks()

			tracks.forEach((track) => {
				track.stop()
			})
			this.stream = null
		}
		if (this.scriptProcessor) {
			// Stop listening to the stream from the microphone
			this.scriptProcessor.removeEventListener(
				'audioprocess',
				this.streamAudioData
			)
		}
		if (this.audioWorklet) {
			this.audioWorklet.port.onmessage = undefined
		}
	}
	streamAudioData = (e?: AudioProcessingEvent, data?: Uint8Array) => {
		// Resample the chunk to the backend sample rate, µ-law encode it,
		// append it to the local fallback buffer, and stream it to the backend.
		if (e?.inputBuffer) {
			const aa = resample(
				e.inputBuffer.getChannelData(0),
				this.contextSampleRate,
				this.BACKEND_SAMPLE_RATE,
				{ method: 'linear' }
			)

			const a16 = this.convertFloat32ToInt16(aa)
			const a8 = alawmulaw.mulaw.encode(a16)
			this.audioData.set(a8, this.audioDataIndex)
			this.audioDataIndex += a8.length

			this.sendRecordStream(a8.buffer)
		} else if (data?.length) {
			const aa = resample(
				data,
				this.contextSampleRate,
				this.BACKEND_SAMPLE_RATE,
				{ method: 'linear' }
			)

			const a16 = this.convertFloat32ToInt16(aa)
			const a8 = alawmulaw.mulaw.encode(a16)
			this.audioData.set(a8, this.audioDataIndex)
			this.audioDataIndex += a8.length

			this.sendRecordStream(a8.buffer, true)
		}
	}
	downsample(buffer, fromSampleRate, toSampleRate) {
		// buffer is a Float32Array
		var sampleRateRatio = Math.round(fromSampleRate / toSampleRate)
		var newLength = Math.round(buffer.length / sampleRateRatio)

		var result = new Float32Array(newLength)
		var offsetResult = 0
		var offsetBuffer = 0
		while (offsetResult < result.length) {
			var nextOffsetBuffer = Math.round((offsetResult + 1) * sampleRateRatio)
			var accum = 0,
				count = 0
			for (
				var i = offsetBuffer;
				i < nextOffsetBuffer && i < buffer.length;
				i++
			) {
				accum += buffer[i]
				count++
			}
			result[offsetResult] = accum / count
			offsetResult++
			offsetBuffer = nextOffsetBuffer
		}
		return result
	}
	convertFloat32ToInt16(buffer) {
		let l = buffer.length
		while (l--) {
			this.int16ConvertArray[l] = buffer[l] * 0x7fff
		}
		return this.int16ConvertArray
	}
	getRecordFinalText() {
		return this.socket
			.fromEvent('speechData')
			.pipe(filter((data: any) => data.isFinal))
			.pipe(map((data: any) => data.text))
	}
	getRecordPreviewText() {
		return this.socket
			.fromEvent('speechData')
			.pipe(filter((data: any) => !data.isFinal))
			.pipe(map((data: any) => data.text))
	}
	recordingFailed(): Observable<string> {
		return this.recordingFailedSubject.asObservable()
	}
	async initializeAudioContext() {
		if (this.utilityService.otherRecordInstanceActive) {
			this.utilityService.presentToast(
				this.translate.instant('POST_QUESTION.anotherRecordInProgress'),
				[],
				'warning'
			)
			return false
		}
		this.utilityService.otherRecordInstanceActive = true

		this.initializeStream()
		window.AudioContext = window.AudioContext || window['webkitAudioContext']
		return this.startRecording()
	}
	async startRecordingStream(s: MediaStream) {
		let AudioContext = window.AudioContext || window['webkitAudioContext']
		let audioContext: AudioContext = new AudioContext()
		if (!audioContext) {
			return
		}
		// AudioNode used to control the overall gain (or volume) of the audio graph

		//const inputPoint = audioContext.createGain();
		//const microphone = audioContext.createMediaStreamSource(s);
		//const analyser = audioContext.createAnalyser();
		try {
			//throw new Error('audio worklet disabled')

			await audioContext.audioWorklet.addModule(
				'./assets/worklet/worklet-processor.js'
			)
			this.audioWorklet = new AudioWorkletNode(
				audioContext,
				'worklet-processor',
				{
					channelCount: 1,
					processorOptions: {
						bufferSize: this.internalBufferLength,
						inputSampleRate: this.contextSampleRate,
						outputSampleRate: this.BACKEND_SAMPLE_RATE,
					},
				}
			)
			let input = audioContext.createMediaStreamSource(s)
			input.connect(this.audioWorklet)
			this.audioWorklet.port.onmessage = (m) => {
				this.streamAudioData(null, m.data)
			}
			console.log('worklet')
		} catch (e) {
			console.log('audio worklet unavailable, falling back to ScriptProcessorNode', e)

			this.scriptProcessor = audioContext.createScriptProcessor(this.internalBufferLength, 1, 1)
			this.scriptProcessor.connect(audioContext.destination)
			console.log('conx', audioContext.sampleRate)
			let input = audioContext.createMediaStreamSource(s)
			input.connect(this.scriptProcessor)

			// microphone.connect(inputPoint);
			// inputPoint.connect(analyser);
			// inputPoint.connect(this.scriptProcessor);

			// Register for the 'audioprocess' event of the audio stream without overwriting an existing scriptProcessor.onaudioprocess handler.
			this.scriptProcessor.addEventListener(
				'audioprocess',
				this.streamAudioData
			)
		}
	}
	async startRecording() {
		if (this.recorder) {
			// It means recording is already started or it is already recording something
			return false
		}
		this.recordingTimeSubject.next({ formatted: '00:00', seconds: 0 })
		try {
			const s = await navigator.mediaDevices.getUserMedia({
				//  audio: true,
				audio: {
					echoCancellation: true,
					sampleRate: this.contextSampleRate,
				},
			})
			this.stream = s
			console.log('stream', s)
			this.record()
			this.startRecordingStream(this.stream)
			return true
		} catch (error) {
			console.log('error recording', error)
			this.recordingFailedSubject.next('permission-error')
			this.utilityService.otherRecordInstanceActive = false
			return false
		}
	}

	private record() {
		try {
			if (MediaRecorder.isTypeSupported('audio/webm;codecs="opus"')) {
				this.recorder = new MediaRecorder(this.stream, {
					mimeType: 'audio/webm;codecs="opus"',
				})
				this.usingMediaRecorder = true
			} else if (MediaRecorder.isTypeSupported('audio/mp4')) {
				this.recorder = new MediaRecorder(this.stream, {
					mimeType: 'audio/mp4',
				})
				this.usingMediaRecorder = true
			} else {
				throw new Error('no mediarecord support type found')
			}

			this.recorder.start()
		} catch (e) {
			console.log('error using mediarecorder', e)
			this.usingMediaRecorder = false
			this.recorder = new RecordRTC.StereoAudioRecorder(this.stream, {
				type: 'audio',
				mimeType: 'audio/wav',
				numberOfAudioChannels: 1,
				// bitsPerSecond: 128000,
				// audioBitsPerSecond: 128000,
				// sampleRate: 96000,
				desiredSampRate: this.contextSampleRate,
			})
			this.recorder.record()
		}

		this.startTime = moment()
		this.interval = setInterval(() => {
			const currentTime = moment()
			const diffTime = moment.duration(currentTime.diff(this.startTime))
			const time =
				this.toString(diffTime.minutes()) +
				':' +
				this.toString(diffTime.seconds())
			this.recordingTimeSubject.next({
				formatted: time,
				seconds: diffTime.asSeconds(),
			})
		}, 100)
	}
	private toString(value) {
		let val = value
		if (!value) {
			val = '00'
		}
		if (value < 10) {
			val = '0' + value
		}
		return val
	}
	stopRecording(kill: boolean = false) {
		console.log('record is', this.recorder, this.usingMediaRecorder)
		if (this.recorder && !this.usingMediaRecorder) {
			this.recorder.stop(
				(blob) => {
					if (this.startTime) {
						this.stopMedia()
					}
				},
				() => {
					this.stopMedia()
					this.recordingFailedSubject.next()
				}
			)
			this.endStream()
		} else if (this.recorder) {
			this.recorder.stop()
			this.stopMedia()
			this.endStream()
		}
		if (kill) {
			console.log('killing record id')
			this.recordId = 0
			this.utilityService.otherRecordInstanceActive = false
			this.audioData.fill(0)
			this.audioDataIndex = 0
		}
	}

	private stopMedia() {
		if (this.recorder) {
			this.recorder = null
			clearInterval(this.interval)
			this.startTime = null
			if (this.stream) {
				this.stream.getTracks().forEach((track) => track.stop())
				this.stream = null
			}
		}
	}

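	// Waits for the backend to confirm the uploaded file (a 'recordUrl' event with a
	// matching recordId); if no confirmation arrives within 10 seconds, timeoutWith
	// falls back to emitting the locally buffered audio instead.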
	subscribeForStreamedAudioFileUrl(buffer:Uint8Array) {
		this.urlSub = this.socket
			.fromEvent('recordUrl')
			.pipe(
				filter((data: any) => {
					console.log('data', data, this.recordId)
					return !this.streamedAudioFileIgnored && data && data.id == this.recordId
				})
			)
			.pipe(
				timeoutWith(
					10000,
					of({
						timeout: true,
						buffer,
					})
				)
			)
			.pipe(take(1))
			.subscribe((data: any) => {
				console.log('got url', data)
				this.recordedFileNameUrl.next(data)
				this.utilityService.otherRecordInstanceActive = false
			})
	}
}

Question from Mahmoud Qurashy

Posted Saturday, August 19, 2023

2 Answers

Have you checked whether the other files in the project affect the values of the variables in this code?

Posted Wednesday, August 21, 2024


It looks like you are using the timeoutWith operator as a fallback to handle cases where streaming fails, typically because of connectivity issues. If you're receiving an empty record file, here are a few things to check:

- Buffer handling: make sure the buffer you pass to timeoutWith actually contains data. If the buffer is initialized or cleared just before the fallback is triggered, it will be empty.
- Timeout duration: check whether the timeout value (10 seconds in your code) is too short for your use case. If network delays are common, consider increasing it.
- Buffer data: when the buffer is emitted because of a timeout, make sure you're recording and saving the buffer data correctly. Double-check how the buffer is constructed and used.
- Socket event handling: make sure the 'recordUrl' event emits the expected data before the timeout triggers. If this event tends to arrive late, verify the socket communication and improve its reliability.
- Check for errors: log errors or exceptions that occur just before the fallback to identify issues that could make timeoutWith trigger prematurely.
- Audio data index: verify that audioDataIndex correctly tracks the amount of audio data recorded.

Start by adding logging statements at various points in your subscribeForStreamedAudioFileUrl method to gain insight into the data flow and pinpoint where the issue starts; a sketch of the most likely culprit follows below.
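Building on the buffer-handling point above: Uint8Array.prototype.subarray, which endStream() uses to build the fallback payload, returns a view over this.audioData rather than a copy. Both initializeStream() and stopRecording(true) call this.audioData.fill(0), so if either runs before the 10-second timeout fires, the fallback emits zeroed bytes, i.e. an empty recording. A minimal, standalone sketch of the difference in plain TypeScript (no Angular; the variable names are illustrative):

// subarray() returns a view that shares memory with the source array;
// slice() returns an independent copy.
const audioData = new Uint8Array([1, 2, 3, 4])

const view = audioData.subarray(0, 4) // shares audioData's underlying buffer
const copy = audioData.slice(0, 4)    // owns its own buffer

audioData.fill(0) // what stopRecording(kill = true) does while the fallback is still pending

console.log(view) // Uint8Array(4) [0, 0, 0, 0]  <- the "empty" record file
console.log(copy) // Uint8Array(4) [1, 2, 3, 4]  <- survives the reset

If that is the cause here, passing this.audioData.slice(0, this.audioDataIndex) to subscribeForStreamedAudioFileUrl in endStream() should make the fallback emit the real audio.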

Posted Sunday, August 11, 2024

