Optimized the semantic consistency scoring process by caching WordNet synset lookups on a per-token basis. Previously, the meaning_scorer re-queried WordNet for every pairwise comparison, incurring redundant overhead. Building a synset_map up front lets each token's synsets be fetched once and reused, significantly reducing computation time during translation analysis.
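The pattern can be sketched as below. This is a minimal, self-contained illustration, not the repository's actual code: `fetch_synsets` is a stand-in for an expensive WordNet query (e.g. NLTK's `wn.synsets(token)`), the Jaccard overlap is a placeholder similarity metric, and the call counter just makes the savings visible.

```python
from itertools import combinations

CALLS = 0  # tracks how many times the expensive lookup runs

def fetch_synsets(token):
    # Stand-in for a WordNet query such as nltk.corpus.wordnet.synsets(token)
    global CALLS
    CALLS += 1
    return {token + "_sense1", token + "_sense2"}  # placeholder synsets

def score_pair(a_syns, b_syns):
    # Placeholder similarity: Jaccard overlap of the two synset sets
    return len(a_syns & b_syns) / len(a_syns | b_syns)

def consistency_scores(tokens):
    # Build the per-token cache once, then reuse it for every pairwise comparison
    unique = sorted(set(tokens))
    synset_map = {t: fetch_synsets(t) for t in unique}
    return {
        (a, b): score_pair(synset_map[a], synset_map[b])
        for a, b in combinations(unique, 2)
    }

scores = consistency_scores(["cat", "dog", "cat", "bird"])
# 3 unique tokens -> only 3 lookups, rather than one per pairwise comparison
```

Without the `synset_map`, each of the pairwise comparisons would trigger its own lookups, so the query count grows quadratically with the token count instead of linearly.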

Boost performance by caching per-token WordNet synsets - Braumeister-Stefan/lina_database_decoder