Simple text search reduction
```javascript
const fullstem =
// [ 'hotel', 'straß', 'sig' ]
// [ 'アプリ', '作' ]
// [ '喜欢', '海边' ]
// [ 'добр' ]
// [ 'أحب', 'أن', 'أكون', 'جانب', 'بحر' ]
// [ 'μ', 'αρεσ', 'ειμα', 'διπλ', 'παραλ' ]
// [ 'szeret', 'tengerpar' ]
// [ 'pia', 'sta', 'ma' ]
// [ 'sahil', 'yan', 'olma', 'sever' ]
// [ '나는', '해변', '옆에', '있고', '싶어' ]
// [ 'நான்', 'கடலோர', 'பக்கம்', 'இர்', 'விரும்பு' ]
```
- Chinese (simplified)
Fullstem works by splitting the input into runs of individual character scripts and then processing each potential language for each run.
This can produce unexpected (but not unintended) behaviour in certain circumstances.
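The script-splitting step can be sketched with Unicode property escapes. This is a minimal illustration of the idea, not fullstem's actual implementation; the `SCRIPTS` list and `scriptRuns` name are hypothetical.

```javascript
// Hypothetical sketch of script splitting -- not fullstem's real code.
// Unicode property escapes pull out runs of each character script.
const SCRIPTS = ['Latin', 'Han', 'Hiragana', 'Hangul', 'Cyrillic', 'Greek'];

function scriptRuns(text) {
  const runs = [];
  for (const script of SCRIPTS) {
    const re = new RegExp(`\\p{Script=${script}}+`, 'gu');
    for (const match of text.match(re) || []) {
      runs.push({ script, text: match });
    }
  }
  return runs;
}

console.log(scriptRuns('airport 空港'));
// [ { script: 'Latin', text: 'airport' }, { script: 'Han', text: '空港' } ]
```

Each run can then be handed to the stemmers of every language that uses that script.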
I can't get enough minestrone soup!
[ "soup", "minestro" ]
So you can see how multi-language sentences can be searched in either language, but that may lead to invalid search hits where a word occurs in two languages.
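A toy illustration of why that happens: every Latin-script token is offered to each candidate language's stemmer, so a loanword can produce a stem in more than one language. The two suffix rules below are deliberately naive stand-ins, not fullstem's actual stemmers.

```javascript
// Made-up, deliberately naive stemmers for illustration only.
const naiveEnglishStem = word => word.replace(/(ing|ed|es|s)$/, '');
const naiveItalianStem = word => word.replace(/(one|ona|are)$/, '');

// The same Latin-script token is stemmed under every candidate language.
const token = 'minestrone';
console.log(naiveEnglishStem(token)); // 'minestrone' (no English suffix matches)
console.log(naiveItalianStem(token)); // 'minestr'
```

Both outputs end up in the index, which is what makes cross-language hits possible (and occasionally spurious).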
Now consider Japanese kanji and Chinese han.
If we use the Japanese sentence
国際の空港に飛行機を見た, roughly "I saw an aeroplane at the international airport",
the result would be
["国","際","空","港","飛行","機","見"]
which is correct for a Japanese search but might yield interesting results for Mandarin searches. The exact cross-over depends on the context, but the ambiguity arises from the fact that many Japanese words are borrowed from Chinese and retain older character forms.
The Korean stemmer is a work in progress. Current behaviour is to pull out any hangul words and strip stopwords.
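That behaviour can be sketched as follows. The stopword list here is a hypothetical stand-in for illustration, not fullstem's actual list:

```javascript
// Sketch of the current Korean handling: keep hangul words, drop stopwords.
const KOREAN_STOPWORDS = new Set(['나는']); // hypothetical list, illustration only

function koreanStems(text) {
  const words = text.match(/\p{Script=Hangul}+/gu) || [];
  return words.filter(word => !KOREAN_STOPWORDS.has(word));
}

console.log(koreanStems('나는 해변 옆에 있고 싶어'));
// with the hypothetical list above: [ '해변', '옆에', '있고', '싶어' ]
```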
If you have any suggestions for improvement, please get in touch or submit a merge request. I do not know many of the languages used here, so cannot know for sure how effective their stemmers are.
This software would not be possible without the fantastic work done in its dependencies. Please check them out on the npm dependencies page.