Docbeamz

Verified Members
  • Content Count: 14
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Docbeamz
  • Rank: Settler


  1. Hi Dario, We eliminated the MediaExtractor problem by removing OpenSL ES, which we had been using for mp3 decompression; it calls MediaExtractor internally. We now use minimp3 for decoding instead (a minimal sketch of that approach follows this post). Thanks for the help
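     For reference, here is a minimal sketch of the frame-by-frame minimp3 decode loop described above. This is an illustration, not the actual Beamz engine code: the decode_mp3 helper and its buffering are assumptions, and the input is assumed to be an mp3 file already loaded into memory.

     // Minimal minimp3 decode loop (illustrative sketch, not engine code).
     #define MINIMP3_IMPLEMENTATION
     #include "minimp3.h"

     #include <cstddef>
     #include <cstdint>
     #include <vector>

     // Decode an in-memory mp3 buffer to interleaved 16-bit PCM.
     std::vector<int16_t> decode_mp3(const uint8_t *mp3, size_t mp3_bytes) {
         mp3dec_t dec;
         mp3dec_init(&dec);

         std::vector<int16_t> pcm_out;
         int16_t frame_pcm[MINIMP3_MAX_SAMPLES_PER_FRAME];
         mp3dec_frame_info_t info;

         size_t pos = 0;
         while (pos < mp3_bytes) {
             // Returns the number of decoded samples per channel
             // (0 while skipping ID3 tags or garbage).
             int samples = mp3dec_decode_frame(&dec, mp3 + pos,
                                               (int)(mp3_bytes - pos),
                                               frame_pcm, &info);
             if (info.frame_bytes == 0)
                 break;  // no more recognizable mp3 frames
             pos += info.frame_bytes;
             if (samples > 0)
                 pcm_out.insert(pcm_out.end(), frame_pcm,
                                frame_pcm + samples * info.channels);
         }
         return pcm_out;
     }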
  2. Hi Dario, Our engine programmer made some progress last night, but we're still singing the Android blues. He was able to intermittently verify that MediaExtractor is referenced somewhere in the engine code. He will isolate the offending code tonight, so we just have to hang in there a bit longer. Here's what he logged:

     "Here are the current results: it took some time to run the beamz sample app on my Huawei Honor 8 (the Android build environment was updated since the last builds), but it looks like I saw creation of a MediaExtractor on launch of the app. It is not created every time, though. In the device logs I see lines like:

     07-23 15:04:26.947 1579-1737/? W/MediaExtractor: creating media extractor in calling process
     07-23 15:04:26.955 1579-1737/? I/MediaExtractor: extractor created in uid: 10028 (u0_a28)

     These appear in two cases: on Android device startup, and on the first launch of the beamz sample app after a device reboot. Second, third, and later launches of the beamz app do not produce logs with creation of a MediaExtractor, but whenever I reboot the device and run the beamz sample app for the first time, I always see these logs. Maybe there is some system cache of the MediaExtractor object, and a new instance of the beamz app can reuse the MediaExtractor from a previous app instance?

     I am going to cut pieces of code to determine which code produces the MediaExtractor logs. First I will try a clean app (without the beamz library), next I will cut all OpenSL ES initialization, and finally I will cut the AudioManager object. I hope this will be enough to find out why the system creates a MediaExtractor."

     It appears that your initial hunch was right: there IS some problem with MediaExtractor on the Focus device. Although we do not use it directly, apparently some included libs do, and coding workarounds may not be possible. Can HTC fix this? Thx Doc Beamz
  3. Hi Dario, Well, we're still stalled on this problem. We're experiencing this issue with our Android music engine. The previous programmer observations I mentioned came from our UI team. Our engine team spent a couple of hours investigating and weighed in. This is what they sent me today:

     "Hi, Doc. Here are my current thoughts: BeamzEngine is not using MediaExtractor. We decode MP3 in native C++ code using the minimp3 library. minimp3 is provided as source code and is platform independent, so I don't think it uses MediaExtractor (at least I don't see it in the minimp3 sources). To work with mp3 files we use the simple fopen/fclose/mmap/munmap functions, which definitely exist on any Android system. So I think the problem is elsewhere; mp3 decoding is not the cause.

     The Java part of BeamzEngine uses android.media.AudioManager to query system audio parameters. Here is its usage:

     // "_context" here is the activity passed to BeamzEngineFactory.createEngine or BeamzEngineFactory.createEngine2
     Object audManObj = _context.getSystemService(Context.AUDIO_SERVICE);
     AudioManager audMan = (AudioManager) audManObj;
     String fpb = audMan.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
     String sr = audMan.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);

     Another possible place: for mp3 recording we use OpenSL ES. Maybe it uses MediaExtractor in its implementation. But you are saying that changing the song files from mp3 to wav fixes the problem, so it doesn't look like file recording or querying of system audio parameters is the cause. But maybe I am wrong."

     To recap: our Android engine runs perfectly on other Android systems. This issue only shows up on the Focus Android device, so it's something unique to the Focus. Our engine programmers will continue to debug this, but we will also be relying on your assistance. (A minimal sketch of the fopen/mmap file-access pattern mentioned above follows this post.) Thanks Doc
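     Since the report above mentions plain fopen/mmap for reading mp3 files, here is a minimal sketch of that file-mapping pattern using the standard POSIX calls named in the post. map_file is a hypothetical helper for illustration, not the engine's actual code.

     // Map a file into memory read-only (sketch; minimal error handling).
     #include <fcntl.h>
     #include <sys/mman.h>
     #include <sys/stat.h>
     #include <unistd.h>

     #include <cstddef>
     #include <cstdint>

     const uint8_t *map_file(const char *path, size_t *out_size) {
         int fd = open(path, O_RDONLY);
         if (fd < 0) return nullptr;

         struct stat st;
         if (fstat(fd, &st) != 0) { close(fd); return nullptr; }

         void *data = mmap(nullptr, (size_t)st.st_size, PROT_READ,
                           MAP_PRIVATE, fd, 0);
         close(fd);  // the mapping remains valid after the fd is closed
         if (data == MAP_FAILED) return nullptr;

         *out_size = (size_t)st.st_size;
         return (const uint8_t *)data;  // release with munmap(ptr, size) when done
     }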
  4. Hi Dario, Thanks for the reply. This is from our programmer: "No, that's not the issue. It's not using assets or raw resources, but I suppose they can try loading the files differently." Thanks
  5. Hi, Thanks for the fast response. Here's the full error:

     E/WVMExtractor: Failed to open libwvm.so: dlopen failed: library "libwvm.so" not found.

     Developer: …………….. "did some digging and found that the issue was with the container format." It involves .mp3 decompression. If they convert the .mp3s into .wavs, the problem doesn't show up.
  6. Hi, Our developers have informed us of an issue with the version of Android used by the Focus that has stalled our project. The Vive Focus (Android) is missing a library, libwvm.so, which is used by our software. We're investigating a workaround (a small probe sketch follows this post). Can you explain why this is missing from the Focus's Android build, whether it will be added, and any suggestions you have for fixing this ASAP? Thx Doc
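     A quick way to confirm the missing library on a device is to probe for it with dlopen and log the result. This is an illustrative sketch, not our actual diagnostic code; check_libwvm is a hypothetical helper.

     // Probe whether libwvm.so can be loaded on this device (sketch).
     #include <dlfcn.h>
     #include <android/log.h>

     void check_libwvm() {
         void *handle = dlopen("libwvm.so", RTLD_NOW);
         if (handle == nullptr) {
             // On the Vive Focus this reports: library "libwvm.so" not found
             __android_log_print(ANDROID_LOG_WARN, "LibCheck",
                                 "dlopen failed: %s", dlerror());
             return;
         }
         __android_log_print(ANDROID_LOG_INFO, "LibCheck", "libwvm.so loaded OK");
         dlclose(handle);
     }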
  7. Which audio format is best for 3D audio positioning, mono or stereo? Stereo panning is a form of audio imaging (between the L and R channels). Is there any perceived conflict from using a stereo-imaged sound file with 3D sound-imaging processes? Thx
  8. Hi Dario, I heard from our programmers today and, although they have not benchmarked yet, they're certain the methods you described will work for streaming audio. There are some small concerns about response latency, but we're confident we can tweak buffer sizes enough to get acceptable latency (each buffer of N frames adds N divided by the sample rate, e.g. 256 frames at 48 kHz is about 5.3 ms). If we hit a roadblock on that, I'll reach out in a new thread. In the meantime, I will set this thread to resolved. Thanks for helping us. Doc Beamz Interactive.
  9. Hi Dario, Our programmers checked out the script reference and they think it will work. However, they're swamped right now and will not be able to test this out for a few days. I'll get back to you with feedback as soon as I hear from them. Thanks Doc
  10. Hi Dario, Sorry for the delay. Basically we need a way to play audio from a buffer. We would fill the buffer with data from the audio engine stream, which Unity should then apply the spatial effect to and play back. This would be similar to a buffer queue in OpenSL (see the sketch after this post for the pattern we mean). Do you know of any way we can do this? Thx Doc
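     For context, here is a minimal sketch of the OpenSL ES buffer-queue pattern referred to above: a callback refills a spare buffer from the engine stream and re-enqueues it each time the player drains one. engine_fill_buffer is a hypothetical stand-in for pulling PCM from the music engine, and the buffer sizes are illustrative.

     // OpenSL ES Android simple buffer queue pattern (illustrative sketch).
     #include <SLES/OpenSLES.h>
     #include <SLES/OpenSLES_Android.h>

     #include <cstdint>

     constexpr int kFrames   = 256;  // frames per buffer
     constexpr int kChannels = 2;    // stereo
     static int16_t g_buf[2][kFrames * kChannels];
     static int g_cur = 0;

     // Hypothetical: fills dst with the next block of interleaved PCM
     // from the interactive music engine's output stream.
     extern void engine_fill_buffer(int16_t *dst, int frames, int channels);

     // Called by OpenSL ES each time the player finishes a buffer:
     // refill the spare buffer and hand it back to the queue.
     static void bq_callback(SLAndroidSimpleBufferQueueItf bq, void * /*ctx*/) {
         engine_fill_buffer(g_buf[g_cur], kFrames, kChannels);
         (*bq)->Enqueue(bq, g_buf[g_cur], sizeof(g_buf[g_cur]));
         g_cur ^= 1;  // ping-pong between the two buffers
     }

     // After creating the audio player and getting its buffer-queue
     // interface bq, register the callback and prime the queue:
     //   (*bq)->RegisterCallback(bq, bq_callback, nullptr);
     //   bq_callback(bq, nullptr);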
  11. Hi Dario, Thanks for the speedy response. Regarding #1: using an audio clip as a sound source will not work for our interactive music application (Jam Studio VR). All audio produced is streamed to the UI by our Beamz engine in response to dynamic trigger activity. We examined the audio-clip option and it cannot work without substantial architectural changes to our music engine. Here's why.

     Audio (music) played by our app must be dynamic because it changes according to the current musical environment. As songs play, our Beamz Interactive Music engine tracks the current musical environment and determines which samples (audio clips) to play at any given moment should a (user) trigger be broken. For example, when a song changes keys, different samples will be played (when triggered) that are musically sympathetic with the current key of the song. When the song changes key again, the samples that play will change. Also, when a trigger is held broken, a series of compatible samples is streamed until the trigger is released. The Beamz engine manages all of this.

     Ideally, it would be perfect if sound-emitting objects could accept an audio stream, just as if someone wanted to put an internet radio somewhere in the VR environment, or maybe a TV. If you can, I would suggest checking out Jam Studio VR; it will all be clear when you see how it works. Thanks again Doc
  12. There appear to be performance issues with the Augmented Reality implementation of the Pro. The frame rate is very poor, the positioning is wrong (everything seems shifted down, and nothing matches its real-world position, so you couldn't, say, reach out and pick something up), and the cameras are very blurry and grainy. It's possible that some of this is due to unfinished software, so my specific questions are:

     1. How do we fix the position offset? Is this due to us having an outdated version of SteamVR, or something like that?
     2. Please elaborate on the framerate issues. With the Vive Pro, the framerate seems lower overall than with the regular Vive. Enabling the camera seems to lower it further, and the camera seems to drop out for short periods; it's currently hard to tell whether the entire app is pausing or just the camera.
     3. What framerate is the camera recording/displaying at?
     4. Are there things we can detect in the environment, such as the floor, walls, etc., to create more immersive AR experiences? (We could use this, for example, to put our video screens on the walls of the actual room.)

     My current setup is an i7 PC with a GTX 980 Ti graphics card.
  13. Is there any way to use streaming audio as the source for 3D spatial audio? If we want to achieve a more subtle, non-aggressive audio pan/volume experience, would we have the ability to control that? For instance, as opposed to a "hard" left pan as you turn your head to the right and away from the audio source, could the pan just "lean" left while the sound remains present to a lesser degree throughout the 360º spectrum? Or, for a significant volume change from moving only a few feet away from an audio source, can that be controlled to remain more subtle?