Generating Similar Words (or Synonyms) with Word Embeddings (Word2Vec)

We have a search engine, and when users type in "Tacos", we also want to search for similar words, such as "Chilis" or "Burritos".

However, it is also possible that users search with multiple keywords, such as "Tacos Mexican Restaurants", and we still want to find similar words such as "Chilis" or "Burritos".

What we do is add the vectors of all the query words together, roughly as in the sketch below. This sometimes works, but with more keywords the combined vector tends to land in a region of the embedding space that has no meaningful neighbors.

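A minimal sketch of what we do now, using gensim (the model name and query here are just examples):

```python
import numpy as np
import gensim.downloader as api

# Example pre-trained model; any GloVe vectors loaded as gensim
# KeyedVectors would work the same way.
model = api.load("glove-wiki-gigaword-100")

query = ["tacos", "mexican", "restaurants"]
vectors = [model[w] for w in query if w in model]

# Combine the word vectors into one query vector. Under cosine
# similarity, summing and averaging give the same neighbors.
query_vec = np.mean(vectors, axis=0)

# Nearest neighbors of the combined vector.
print(model.similar_by_vector(query_vec, topn=10))
```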

Is there an approach that can use multiple words, not just one, and still give us similar results? We are using the pre-trained GloVe vectors from Stanford. Would it help to train on food-related articles and specifically use those domain-specific embeddings for this task?

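If domain-specific embeddings would help, we assume training them would look roughly like this (a sketch using gensim's Word2Vec; the corpus file food_articles.txt is hypothetical):

```python
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

# Hypothetical corpus: one food-related article per line.
with open("food_articles.txt", encoding="utf-8") as f:
    sentences = [simple_preprocess(line) for line in f]

model = Word2Vec(
    sentences,
    vector_size=100,  # embedding dimensionality
    window=5,         # context window size
    min_count=2,      # drop very rare tokens
    workers=4,
)

# Food-domain neighbors of a single query word.
print(model.wv.most_similar("tacos", topn=5))
```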

Tags: word2vec, word-embeddings

asked by user1157751

          1 Answer

Word Mover’s Distance (WMD) is an algorithm for measuring the distance between two documents represented as sets of embedded words.

WMD measures the dissimilarity between two text documents as the minimum cumulative distance that the embedded words of one document need to "travel" to reach the embedded words of the other document.

For example:

[Figure: illustration of Word Mover’s Distance between two documents. Source: "From Word Embeddings To Document Distances" (Kusner et al., 2015)]

In your problem, it will allow you to find that "Tacos Mexican Restaurants" is similar to "Burritos Taqueria" even though they share no common string literals.

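A minimal sketch, assuming gensim's KeyedVectors.wmdistance (which relies on the POT optimal-transport package; older gensim versions used pyemd) and an example pre-trained model:

```python
import gensim.downloader as api

# Example pre-trained embeddings; any KeyedVectors model works.
model = api.load("glove-wiki-gigaword-100")

query = ["tacos", "mexican", "restaurants"]
candidate = ["burritos", "taqueria"]

# Word Mover's Distance: lower means the two word sets are closer
# in embedding space, even when they share no tokens.
print(model.wmdistance(query, candidate))
```

In a search setting, candidate phrases can then be ranked by their WMD to the query, smallest distance first.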

answered by Brian Spiering