Show others how I hear myself


























Sooo... I've been thinking about this. We all know that we sound different from what we hear of our own voice. It is easy to find out how others hear us: just record yourself and listen to the recording.

But what about the other way around?

Is there a way to transform our voice so that others can hear us as we perceive our own voice? I find it quite an interesting question. Sadly, I couldn't find anything on the web after a couple of Google searches. Has nobody thought about this, or is it impossible for some reason that I'm not seeing?

Any leads on this would be appreciated :).






























  • You could make the problem as easy as possible: make a recording of your speech that, when listened to by you through headphones, sounds the same as your speech sounds to you when you speak in an anechoic chamber. Not sure how to do that.
    – Olli Niemitalo, Dec 11 '18 at 13:26

  • I just wanted to propose exactly that. However, is it really necessary to exclude the influence of the room? The directivity of your voice as a sound source is surely a factor, but I think this method will probably work quite well if the recording is done in the same place as where the "adjustment procedure" takes place.
    – applesoup, Dec 11 '18 at 13:32























signal-analysis audio transform














edited Dec 12 '18 at 15:28 by Glorfindel
asked Dec 11 '18 at 13:00 by Kevin Fiegenbaum


















3 Answers
































It is not impossible, but it is not going to be a walk in the park either.



What you would be trying to do is add to the voice signal those vibrations that are delivered to the ear via the bones and are not accessible to anyone else.



But doing this accurately is easier said than done.



Sound propagation through a medium depends very much on its density. Sound travels at ~1500 m/s in water, and with less dissipation, than it does in air (~340 m/s). Bone is denser than air, so sound should travel faster through bone. This means that "your" sound begins to excite your ears first, followed by the sound that you perceive via the "normal" air channel. In reality, bone has an internal structure that may affect how different frequencies pass through it, but at the range of frequencies we are talking about we can perhaps treat it as an equivalent solid. This can only be approximated, both because any attempt at measurement would have to be invasive and because hearing is subjective.
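To get a feel for the timing side of this, here is a back-of-the-envelope sketch. The path lengths and the speed of sound in bone are assumed, illustrative values only, not measurements:

```python
# Rough arrival-time difference between the bone-conducted and the
# air-conducted components of one's own voice. All numbers are assumptions
# for illustration: ~15 cm skull path, ~3000 m/s in bone, ~20 cm
# mouth-to-ear path through air at ~340 m/s.

def arrival_time(path_m, speed_m_s):
    """Propagation time in seconds for a straight path at a fixed speed."""
    return path_m / speed_m_s

t_bone = arrival_time(0.15, 3000.0)  # bone-conducted component (assumed values)
t_air = arrival_time(0.20, 340.0)    # air-conducted component (assumed values)

delta_us = (t_air - t_bone) * 1e6
print(f"bone-conducted sound leads by ~{delta_us:.0f} microseconds")
```

Sub-millisecond differences like this are below the echo-perception threshold, which is one reason the two components fuse into a single percept rather than sounding like two voices.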



Hearing, or the perception of sound, is a huge source of difficulty here. The ear itself, that is the outer ear (the visible part), the canal and the inner mechanism, works together in very complicated ways; this is the subject of psychoacoustics. One example of this complex processing is phantom tones, where the brain fills in things that are "supposed" to be there. The brain may also have already developed ways of isolating the self-generated signal that are still inaccessible to us.



But a simplistic (simplistic!) way to witness the difference between being the listener of your own sound and not is this:



Record a short, simple word (e.g. "Fishbone", which contains both low frequencies (b, o, n) and high frequencies (F, sh, i, e)), followed by a bit of silence, and loop it through an equaliser into your headphones. Start playback and synchronise yourself uttering the word with the recording (so, something like "Fishbone...Fishbone...Fishbone..."). Now fiddle with the equaliser until what you hear and what you utter sound reasonably similar.



At that point, the settings on the equaliser represent the difference between the recorded sound and the sound as you perceive it, and in theory any other speech passed through that equaliser would simulate how it arrives at your ears, as if you had generated it with a source inside your body.
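Once the gains have been found by ear, they can be baked into any clip. A minimal numpy sketch, where the band edges and gains are made-up placeholders for whatever you dialled in on the equaliser:

```python
import numpy as np

# Apply per-band equaliser gains (found by ear, as described above) to a
# signal via the FFT. The bands below are hypothetical placeholders.

def apply_eq(signal, sample_rate, bands):
    """bands: list of (low_hz, high_hz, gain_db) triples applied via FFT."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    for low, high, gain_db in bands:
        mask = (freqs >= low) & (freqs < high)
        spectrum[mask] *= 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(spectrum, n=len(signal))

# Synthetic "voice": one low and one high partial, so the effect is visible.
sr = 16000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

# Hypothetical settings: boost the lows, cut the highs, mimicking the
# low-frequency emphasis usually attributed to bone conduction.
eq = [(0, 500, +6.0), (2000, 8000, -6.0)]
processed = apply_eq(voice, sr, eq)
```

A real EQ plugin uses smooth filter shapes rather than brick-wall FFT bands, but the principle, a fixed per-band gain curve applied to arbitrary speech, is the same.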



Hope this helps.






answered Dec 11 '18 at 13:47 by A_A

















  • It's probably impossible due to individual differences in perception and the impossibility of quantifying that subjectivity. Yet the differences could be minor, as in the case of every manufactured 1000 uF capacitor actually being slightly different...
    – Fat32, Dec 11 '18 at 20:41

  • @Fat32 I could not decide on impossibility because, technically, it could be possible to quantify/measure the contribution of the second channel established through the bones and, via reasonable assumptions, come up with some approximation, like conveying what a medical condition feels like, which is totally different from the "patient's" perspective. That would be a better approximation than just EQ. But at the point of perception, yes, right now it would be impossible to suggest the definitive "filter" that would transform the sound clip as requested.
    – A_A, Dec 11 '18 at 22:18

  • Restated another way: even if the exact same physical stimulus is created at the cochleas of two distinct individuals, they will probably hear two different perceptions, and what they actually hear is (afaik) a private experience closed to any external inquisition, mathematical or of any other sort... That said, the fact that humans can communicate acoustically at all is a result of the discrete nature of language.
    – Fat32, Dec 11 '18 at 23:09

  • Thanks a lot! This has been very informative and helpful, and at the same time very disappointing xD. I was afraid that every human's bone structure alters the sound in a different manner, but I didn't think about the ear itself as another disturbance. Well, at least there MIGHT exist, for each individual, a function that translates the sound accordingly.
    – Kevin Fiegenbaum, Dec 12 '18 at 11:48

  • @KevinFiegenbaum Thank you for letting me know. Perception is the source of a lot of thinking. The brain couples to reality through the senses and creates and confirms (or rejects) models of what is probably happening. Optical illusions are cases where two "guesses" (models) fit the same observation and the brain can't decide, so it switches between them. All senses arrive at the brain already encoded, and it is incredibly difficult to really know how they are experienced by the individual. The best we can do is a reasonable guess. All the best.
    – A_A, Dec 12 '18 at 15:21

































The most practical attempt that I am aware of is by Won and Berger (2005). They simultaneously recorded vocalizations at the mouth with a microphone and on the skull with a homemade vibrometer. They then estimated the relevant transfer functions with linear predictive coding and cepstral smoothing.
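Won and Berger's exact pipeline used linear predictive coding and cepstral smoothing; as a cruder stand-in, the air-to-skull transfer function can be sketched by cross-spectral division of the two simultaneous recordings. The data below is synthetic (the real measurement needs a microphone plus a vibrometer), with a known FIR filter playing the role of the skull path:

```python
import numpy as np
from scipy import signal

# Estimate the linear transfer function mapping one simultaneous recording
# (air microphone, x) onto another (skull vibrometer, y) via the standard
# H1 estimator: H(f) = Pxy(f) / Pxx(f). This is a simplified stand-in for
# the LPC/cepstral approach in the paper, for illustration only.

def estimate_transfer(x, y, fs, nperseg=1024):
    """Welch-averaged H1 transfer-function estimate from x to y."""
    f, pxy = signal.csd(x, y, fs=fs, nperseg=nperseg, detrend=False)
    _, pxx = signal.welch(x, fs=fs, nperseg=nperseg, detrend=False)
    return f, pxy / pxx

# Synthetic stand-in data: y is x passed through a known 2-tap FIR filter,
# so the estimate can be checked against the filter's true response.
rng = np.random.default_rng(0)
fs = 16000
x = rng.standard_normal(fs * 4)
b = [1.0, -0.5]                      # hypothetical "skull path" filter
y = signal.lfilter(b, [1.0], x)

f, h = estimate_transfer(x, y, fs)
```

In the real setting x and y would be the recorded signals themselves, and the estimated H (or its smoothed envelope) is the filter you would apply to recorded speech to approximate the bone-conducted component.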



































Before you get disappointed, let me suggest another approach.

As I see it, you have two very different parts: knowing the equalization to apply (personalized to each person), and applying it to a particular signal (your voice).

1st part: a model of the internal human hearing system

There are professionals working to collect data on this, to standardize the process, and so on. As far as I know, there are efforts to develop measures and graphs beyond the classic audiogram (which measures air- and bone-conducted signals). Some of them are "listening tests" (more subjective, but interesting as well).

Align yourself with these professionals. If you follow their work, you just need their results; let them do the heavy lifting. They know their part, which took them decades of investigation, and they are advancing exactly the knowledge you need: a sort of audiogram that measures how someone hears "from within". I bet they are graphing that, and you just need that graph.

2nd part: simulation

I've done something similar to what you are trying to do. From the audiogram of any person, you can hear as he or she does. This is done with ffmpeg. You can check it out here: comomeoyes.com



Basically, you record your voice, and an algorithm equalizes it with the personalized audiogram. This way, you can enter the audiogram of a person with hearing loss and listen for yourself to how he or she hears you.
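The audiogram-driven equalization step can be sketched like this, assuming ffmpeg's `equalizer` filter as the back end. The audiogram values are hypothetical, and this is not necessarily the exact pipeline behind comomeoyes.com:

```python
# Turn an audiogram into an ffmpeg filter chain: attenuate each standard
# audiogram frequency by the measured hearing loss at that frequency.
# The audiogram below is a made-up example person.

audiogram = {250: 10, 500: 15, 1000: 20, 2000: 35, 4000: 50, 8000: 60}

def audiogram_to_ffmpeg_filter(audiogram):
    """Build an -af argument with one equalizer band per audiogram point.

    t=q selects Q-factor width; w=1.0 is an assumed bandwidth; g is the
    gain in dB (negative, so each band is cut by the measured loss).
    """
    parts = [
        f"equalizer=f={freq}:t=q:w=1.0:g={-loss}"
        for freq, loss in sorted(audiogram.items())
    ]
    return ",".join(parts)

af = audiogram_to_ffmpeg_filter(audiogram)
print(f'ffmpeg -i voice.wav -af "{af}" heard.wav')
```

For the question here you would feed in the opposite kind of curve: not a loss audiogram, but the measured boost/cut that the internal (bone-conducted) path applies to one's own voice.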



I understand you would like to do the same, but with a different audiogram: one that models how the internal hearing system equalizes the sound.

I bet such an audiogram could already exist; audiologists, otorhinolaryngologists and researchers may be discussing the kinds of acoustic tests needed to collect the data and model a useful graph from the measurements.

Good luck. Your attempt could help others.




























      3 Answers
      3






      active

      oldest

      votes








      3 Answers
      3






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      10














      It is not impossible but it is not going to be a walk in the park too.



      What you would be trying to do is to add to the voice signal, those vibrations that are delivered to the ear via the bones and are not accessible to anyone else.



      But this is easier said than done in an accurate way.



      Sound propagation through a medium depends very much on its density. Sound travels at ~1500m/s in water and with less dissipation than it travels in air (~340m/s). Bone is denser than air, therefore sound should travel faster through bone. This means that "your" sound begins to excite your ears first, followed by the sound that you perceive via the "normal" air channel. In reality, bone has an internal structure that might be affecting the way different frequencies pass through it but at the range of frequencies we are talking about, perhaps we can consider it as an equivalent solid. This can only be approximated because any attempt at measurement would have to be invasive but also because hearing is subjective.



      Hearing, or the perception of sound is a HUGE contributor of difficulty here. The ear itself, the outer ear (the visible bit), the canal and the inner mechanism work together in very complicated ways. This is the subject of psychoacoustics. One example of this complex processing is phantom tones where the brain is filling in things that are supposed to be there. The brain itself may have already developed ways of isolating the self-generated signal that are inaccessible to us yet.



      But, a simplistic (simplistic!) way to witness the differences between being the listener of your own sound and not is this:



      Record a short and simple word (e.g. "Fishbone", a word that has both low frequencies (b,o,n) and high frequencies (F,sh,i,e)) with a bit of silence and loop it through an equaliser through your headphones. Start playback and synchronise your self uttering the word with the recording (so, something like "Fishbone...Fishbone...Fishbone..."). Now try to fiddle with the equaliser until what you hear and what you utter are reasonably similar.



      At that point, the settings on the equaliser would represent the differences between the sound and what it is perceived through you and theoretically, any other speech passed through that equaliser would simulate how it arrives at your ears, as if you would have generated it with a source inside your body.



      Hope this helps.






      share|improve this answer

















      • 1




        it's probably impossible due to the individual differences of perception and impossibility of quantifying that subjectivity. Yet the differences could be minor, such as in the case of every produced 1000uF cap is actually slightly different...
        – Fat32
        Dec 11 '18 at 20:41








      • 1




        @Fat32 I could not decide on the impossibility because technically, it could be possible to quantify / measure the contribution of the second channel which is established through the bones and via reasonable assumptions come up with some approximation. Like what it feels like in a medical condition which is totally different for the "patient" perspective. That would be a better approximation than just EQ. But at the point of perception, yes, right now it would be impossible to suggest the definitive "filter" that would transform the sound clip as requested.
        – A_A
        Dec 11 '18 at 22:18










      • re-stated in another way: given the same exact phsyical stimulus is created at the cochleas of two distinct individuals, they will (probably) hearing two different perceptions and what they actually hear (afaik) is a self experience that's closed to any external inquisiton of any sort yet mathematical... That being said, humans can communicate acoustically is a result of the discrete nature of the language.
        – Fat32
        Dec 11 '18 at 23:09












      • Thanks a lot! This has been very informative and helpful and at the same time very dissappointing xD. I was afraid that every human bone structure alters the sound in a different manner.. but I didnt think about the ear itself as another disturbance. Well, at least their MIGHT exist a certain function for each individual human that translates the sound ~accordingly.
        – Kevin Fiegenbaum
        Dec 12 '18 at 11:48












      • @KevinFiegenbaum Thank you for letting me know. Perception is the source of lots of thinking. The brain couples to reality through the senses and creates and confirms (or rejects) models of what is probably happening. Optical illusions are cases where two "guesses" (models) fit the same explanation and the brain can't decide so it switches between them. All senses arrive at the brain already encoded and it is incredibly difficult to really know how they are experienced by the individual. The best we can do is a reasonable guess. All the best.
        – A_A
        Dec 12 '18 at 15:21
















      10














      It is not impossible but it is not going to be a walk in the park too.



      What you would be trying to do is to add to the voice signal, those vibrations that are delivered to the ear via the bones and are not accessible to anyone else.



      But this is easier said than done in an accurate way.



      Sound propagation through a medium depends very much on its density. Sound travels at ~1500m/s in water and with less dissipation than it travels in air (~340m/s). Bone is denser than air, therefore sound should travel faster through bone. This means that "your" sound begins to excite your ears first, followed by the sound that you perceive via the "normal" air channel. In reality, bone has an internal structure that might be affecting the way different frequencies pass through it but at the range of frequencies we are talking about, perhaps we can consider it as an equivalent solid. This can only be approximated because any attempt at measurement would have to be invasive but also because hearing is subjective.



      Hearing, or the perception of sound is a HUGE contributor of difficulty here. The ear itself, the outer ear (the visible bit), the canal and the inner mechanism work together in very complicated ways. This is the subject of psychoacoustics. One example of this complex processing is phantom tones where the brain is filling in things that are supposed to be there. The brain itself may have already developed ways of isolating the self-generated signal that are inaccessible to us yet.



      But, a simplistic (simplistic!) way to witness the differences between being the listener of your own sound and not is this:



      Record a short and simple word (e.g. "Fishbone", a word that has both low frequencies (b,o,n) and high frequencies (F,sh,i,e)) with a bit of silence and loop it through an equaliser through your headphones. Start playback and synchronise your self uttering the word with the recording (so, something like "Fishbone...Fishbone...Fishbone..."). Now try to fiddle with the equaliser until what you hear and what you utter are reasonably similar.



      At that point, the settings on the equaliser would represent the differences between the sound and what it is perceived through you and theoretically, any other speech passed through that equaliser would simulate how it arrives at your ears, as if you would have generated it with a source inside your body.



      Hope this helps.






      share|improve this answer

















      • 1




        it's probably impossible due to the individual differences of perception and impossibility of quantifying that subjectivity. Yet the differences could be minor, such as in the case of every produced 1000uF cap is actually slightly different...
        – Fat32
        Dec 11 '18 at 20:41








      • 1




        @Fat32 I could not decide on the impossibility because technically, it could be possible to quantify / measure the contribution of the second channel which is established through the bones and via reasonable assumptions come up with some approximation. Like what it feels like in a medical condition which is totally different for the "patient" perspective. That would be a better approximation than just EQ. But at the point of perception, yes, right now it would be impossible to suggest the definitive "filter" that would transform the sound clip as requested.
        – A_A
        Dec 11 '18 at 22:18










      • re-stated in another way: given the same exact phsyical stimulus is created at the cochleas of two distinct individuals, they will (probably) hearing two different perceptions and what they actually hear (afaik) is a self experience that's closed to any external inquisiton of any sort yet mathematical... That being said, humans can communicate acoustically is a result of the discrete nature of the language.
        – Fat32
        Dec 11 '18 at 23:09












      • Thanks a lot! This has been very informative and helpful and at the same time very dissappointing xD. I was afraid that every human bone structure alters the sound in a different manner.. but I didnt think about the ear itself as another disturbance. Well, at least their MIGHT exist a certain function for each individual human that translates the sound ~accordingly.
        – Kevin Fiegenbaum
        Dec 12 '18 at 11:48












      • @KevinFiegenbaum Thank you for letting me know. Perception is the source of lots of thinking. The brain couples to reality through the senses and creates and confirms (or rejects) models of what is probably happening. Optical illusions are cases where two "guesses" (models) fit the same explanation and the brain can't decide so it switches between them. All senses arrive at the brain already encoded and it is incredibly difficult to really know how they are experienced by the individual. The best we can do is a reasonable guess. All the best.
        – A_A
        Dec 12 '18 at 15:21














      10












      10








      10






      It is not impossible but it is not going to be a walk in the park too.



      What you would be trying to do is to add to the voice signal, those vibrations that are delivered to the ear via the bones and are not accessible to anyone else.



      But this is easier said than done in an accurate way.



      Sound propagation through a medium depends very much on its density. Sound travels at ~1500m/s in water and with less dissipation than it travels in air (~340m/s). Bone is denser than air, therefore sound should travel faster through bone. This means that "your" sound begins to excite your ears first, followed by the sound that you perceive via the "normal" air channel. In reality, bone has an internal structure that might be affecting the way different frequencies pass through it but at the range of frequencies we are talking about, perhaps we can consider it as an equivalent solid. This can only be approximated because any attempt at measurement would have to be invasive but also because hearing is subjective.



      Hearing, or the perception of sound is a HUGE contributor of difficulty here. The ear itself, the outer ear (the visible bit), the canal and the inner mechanism work together in very complicated ways. This is the subject of psychoacoustics. One example of this complex processing is phantom tones where the brain is filling in things that are supposed to be there. The brain itself may have already developed ways of isolating the self-generated signal that are inaccessible to us yet.



      But, a simplistic (simplistic!) way to witness the differences between being the listener of your own sound and not is this:



      Record a short and simple word (e.g. "Fishbone", a word that has both low frequencies (b,o,n) and high frequencies (F,sh,i,e)) with a bit of silence and loop it through an equaliser through your headphones. Start playback and synchronise your self uttering the word with the recording (so, something like "Fishbone...Fishbone...Fishbone..."). Now try to fiddle with the equaliser until what you hear and what you utter are reasonably similar.



      At that point, the settings on the equaliser would represent the differences between the sound and what it is perceived through you and theoretically, any other speech passed through that equaliser would simulate how it arrives at your ears, as if you would have generated it with a source inside your body.



      Hope this helps.






      share|improve this answer












      It is not impossible but it is not going to be a walk in the park too.



      What you would be trying to do is to add to the voice signal, those vibrations that are delivered to the ear via the bones and are not accessible to anyone else.



      But this is easier said than done in an accurate way.



      Sound propagation through a medium depends very much on its density. Sound travels at ~1500m/s in water and with less dissipation than it travels in air (~340m/s). Bone is denser than air, therefore sound should travel faster through bone. This means that "your" sound begins to excite your ears first, followed by the sound that you perceive via the "normal" air channel. In reality, bone has an internal structure that might be affecting the way different frequencies pass through it but at the range of frequencies we are talking about, perhaps we can consider it as an equivalent solid. This can only be approximated because any attempt at measurement would have to be invasive but also because hearing is subjective.



      Hearing, or the perception of sound is a HUGE contributor of difficulty here. The ear itself, the outer ear (the visible bit), the canal and the inner mechanism work together in very complicated ways. This is the subject of psychoacoustics. One example of this complex processing is phantom tones where the brain is filling in things that are supposed to be there. The brain itself may have already developed ways of isolating the self-generated signal that are inaccessible to us yet.



      But, a simplistic (simplistic!) way to witness the differences between being the listener of your own sound and not is this:



      Record a short and simple word (e.g. "Fishbone", a word that has both low frequencies (b,o,n) and high frequencies (F,sh,i,e)) with a bit of silence and loop it through an equaliser through your headphones. Start playback and synchronise your self uttering the word with the recording (so, something like "Fishbone...Fishbone...Fishbone..."). Now try to fiddle with the equaliser until what you hear and what you utter are reasonably similar.



      At that point, the settings on the equaliser would represent the differences between the sound and what it is perceived through you and theoretically, any other speech passed through that equaliser would simulate how it arrives at your ears, as if you would have generated it with a source inside your body.



      Hope this helps.





























      answered Dec 11 '18 at 13:47









      A_A

      7,27931731












      • 1




        It's probably impossible due to individual differences in perception and the impossibility of quantifying that subjectivity. Yet the differences could be minor, much as every 1000 µF capacitor off a production line is actually slightly different...
        – Fat32
        Dec 11 '18 at 20:41








      • 1




        @Fat32 I could not decide on impossibility because, technically, it could be possible to quantify or measure the contribution of the second channel established through the bones and, via reasonable assumptions, come up with some approximation — like conveying what a medical condition feels like, which is totally different from the "patient" perspective. That would be a better approximation than a plain EQ. But at the point of perception, yes, right now it would be impossible to specify the definitive "filter" that would transform the sound clip as requested.
        – A_A
        Dec 11 '18 at 22:18










      • Re-stated another way: even if the exact same physical stimulus were created at the cochleas of two distinct individuals, they would (probably) have two different perceptions, and what they actually hear is (afaik) a private experience closed to any external inquiry, mathematical or otherwise... That being said, the fact that humans can communicate acoustically at all is a result of the discrete nature of language.
        – Fat32
        Dec 11 '18 at 23:09












      • Thanks a lot! This has been very informative and helpful, and at the same time very disappointing xD. I was afraid that every human's bone structure alters the sound in a different manner, but I didn't think about the ear itself as another disturbance. Well, at least there MIGHT exist, for each individual human, a function that translates the sound accordingly.
        – Kevin Fiegenbaum
        Dec 12 '18 at 11:48












      • @KevinFiegenbaum Thank you for letting me know. Perception is the source of a lot of thinking. The brain couples to reality through the senses and creates, then confirms (or rejects), models of what is probably happening. Optical illusions are cases where two "guesses" (models) fit the same input and the brain can't decide, so it switches between them. All senses arrive at the brain already encoded, and it is incredibly difficult to really know how they are experienced by the individual. The best we can do is a reasonable guess. All the best.
        – A_A
        Dec 12 '18 at 15:21
















      11














      The most practical attempt that I am aware of is by Won and Berger (2005). They simultaneously recorded vocalizations at the mouth with a microphone and on the skull with a homemade vibrometer. They then estimated the relevant transfer functions with linear predictive coding and cepstral smoothing.
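The core signal-processing step — estimating a smooth spectral envelope with linear predictive coding — can be sketched as below. This is a toy on synthetic data (a stand-in for the real microphone and vibrometer recordings), not a reimplementation of Won and Berger's method; in their setup, the ratio of the two envelopes would give the mouth-to-skull transfer function.

```python
# Toy sketch of LPC spectral-envelope estimation: fit an all-pole model to a
# frame and evaluate its frequency response. Synthetic data stands in for the
# real microphone / vibrometer recordings.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import freqz

def lpc(x, order):
    """Autocorrelation-method LPC: solve the Yule-Walker normal equations."""
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    a = solve_toeplitz(r[:-1], r[1:])     # prediction coefficients a_1..a_p
    return np.concatenate(([1.0], -a))    # A(z) = 1 - sum_k a_k z^-k

np.random.seed(0)
fs = 8000
t = np.arange(2048) / fs
# Two "formants" plus a little noise stand in for a speech frame.
frame = (np.sin(2 * np.pi * 500 * t)
         + 0.5 * np.sin(2 * np.pi * 1500 * t)
         + 0.01 * np.random.randn(len(t)))
A = lpc(frame * np.hanning(len(frame)), order=12)
w, h = freqz([1.0], A, worN=512, fs=fs)   # all-pole envelope 1/A(z)
envelope_db = 20 * np.log10(np.abs(h) + 1e-12)
```

The envelope shows peaks near the 500 Hz and 1500 Hz components; dividing the envelopes of two simultaneous recordings (in dB: subtracting them) would approximate the transfer function between the two measurement points.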
































          answered Dec 11 '18 at 17:05









          StrongBad

          2313



























              0














              Before you get disappointed, let me suggest another approach.



              As I see it, the problem has two very different parts: knowing which equalization to apply (personalized to each person), and applying it to a particular signal (your voice).



              1st part: model of the internal human hearing system



              Professionals are working to collect data on this, to standardize the process, and so on. As far as I know, there are efforts to develop measures and graphs beyond the classic audiogram (which measures air- and bone-conduction thresholds). Some of these are "listening tests" (more subjective, but interesting as well).



              Follow these professionals' work and you only need their results; let them do the heavy lifting. They know their part, which took decades of investigation, and they are advancing exactly the knowledge you need: a sort of audiogram that measures how someone hears "from within". I bet they are graphing that, and you just need that graph.



              2nd part: simulation



              I've done something similar to what you are trying to do: given any person's audiogram, you can hear your own voice the way they would hear it. This is done with ffmpeg. You can check it out here: comomeoyes.com



              Basically, you record your voice, and an algorithm equalizes it according to the personalized audiogram. This way, you can enter the audiogram of a person with hearing loss and listen for yourself to how they hear you.
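The audiogram-equalization idea can be sketched as follows: interpolate the hearing-loss values (dB at the standard test frequencies) across all FFT bins and attenuate each bin accordingly. This is not the comomeoyes.com implementation (which uses ffmpeg); the audiogram values below are made up for illustration, shaped like typical high-frequency hearing loss.

```python
# Rough sketch of "equalize a recording by an audiogram".
# The audiogram values are hypothetical, for illustration only.
import numpy as np

def apply_audiogram(x, fs, freqs_hz, loss_db):
    """Attenuate each FFT bin by the interpolated hearing loss at its frequency."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    loss = np.interp(f, freqs_hz, loss_db)          # dB of loss per bin
    return np.fft.irfft(X * 10 ** (-loss / 20), n=len(x))

fs = 16000
t = np.arange(fs) / fs
# Two sinusoids stand in for low and high components of a voice recording.
x = np.sin(2 * np.pi * 250 * t) + np.sin(2 * np.pi * 4000 * t)
# Hypothetical audiogram: mild loss at low frequencies, severe at high ones.
freqs = [125, 250, 500, 1000, 2000, 4000, 8000]     # Hz (standard test points)
loss = [5, 5, 10, 15, 30, 50, 70]                   # dB HL
y = apply_audiogram(x, fs, freqs, loss)
```

With these numbers, the 4 kHz component is attenuated by about 50 dB while the 250 Hz component loses only about 5 dB, which is why such a voice sounds muffled to the listener being simulated.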



              I understand you would like to do the same, but with a different audiogram, one that models how the internal hearing system equalizes the sound.



              I bet such an audiogram could already exist, and audiologists, otorhinolaryngologists, researchers and the like may be discussing which acoustic tests to run to gather the data they need to turn the measurements into a useful graph.



              Good luck. Your attempt could help others.




































                  answered Dec 12 '18 at 17:33









                  Giuseppe

                  1012

































