Is it possible to have a dynamic field, or to pass the field for suggestions at runtime (in the query, for example) with SuggestComponent? Depending on the user's language, I would like to suggest different things. I have a dynamic field name_* with concrete fields name_pl, name_de, and name_en (there can be more; I want flexibility here), and I would like to get suggestions depending on the language: for pl from name_pl, for en from name_en, and so on. So far I have a standard Suggester with the field specified, but actually I need either to use name_* or, preferably, to pass the field name at runtime, for example: suggest.field=name_pl

It is not the answer you may expect, but I started a comment and ended up with this. By using a dynamic field here, you would have to rebuild the suggester at each query that requires a specific SuggestComponent dictionary. The value for field should remain static, because it is parsed once to build a dictionary index from that field; otherwise you would have to delete and rebuild that index each time a suggest query requires a dictionary other than the one previously built. Instead, you should replicate the suggester definition for each language you may have, so that Solr can build one dictionary index per field/language (just name the suggesters according to the target field's language). Now you can query the target dictionary dynamically: suggest?suggest=true&suggest.q=name&suggest.

Elasticsearch: A Bolt-On Approach

If a database's internal search features are not adequate to satisfy the desired user experience, another option is to bolt on a dedicated search engine, such as Elasticsearch, alongside the database. This provides the search features demanded by customers, but it does so while imposing additional constraints on developers and ops teams and driving up data duplication and technology sprawl. Bolting a specialized search engine onto your database mandates synchronizing data between the two systems.

How search works with a bolt-on solution: to surface relevant and up-to-date search results, the database and search engine need to be kept synchronized, duplicating data between the systems. This means engineering teams need to create a synchronization mechanism that replicates data from the database to the search engine. Typically they will build a data pipeline with custom filtering and transformation logic on top of a messaging system such as Apache Kafka, or use packaged connectors from specialized providers. Whether building or buying, the process takes time and adds ongoing costs.

The synchronization mechanism also has to be deployed onto its own nodes, creating additional hardware sprawl. Once deployed, it needs to be monitored and managed, adding more engineering overhead. It is important that replication to the search engine keeps pace with database writes, so that search results do not excessively lag the database and break application SLAs; monitoring the replication process is necessary to identify and remediate synchronization issues. This becomes especially complex if the search index falls so far behind the database that it has to be resynced from scratch, causing potential application downtime. It is not uncommon to find that 10% of engineering cycles are lost to manually recovering from synchronization failures. New application features that necessitate changes to the database's schema often require both the synchronization logic and the search engine schema to be updated at the same time.

While users get the rich search experience they expect, this comes at a significant cost. The application stack gets more complex and unwieldy. All of this translates to reduced developer velocity, compromised customer experience, and escalating costs.
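The per-language approach from the Solr Q&A above — one suggester definition per language field — might look like the following solrconfig.xml sketch. The suggester names (suggester_pl, suggester_en) and the lookup/dictionary implementations are illustrative assumptions, not taken from the original post:

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <!-- One suggester per language field; names follow the target field's language -->
  <lst name="suggester">
    <str name="name">suggester_pl</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">name_pl</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
  </lst>
  <lst name="suggester">
    <str name="name">suggester_en</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">name_en</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
  </lst>
</searchComponent>
```

A request can then select the dictionary at query time with Solr's standard suggest.dictionary parameter, e.g. /suggest?suggest=true&suggest.q=name&suggest.dictionary=suggester_pl, so each language gets its own prebuilt dictionary index instead of one suggester being rebuilt per query.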
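As a toy illustration of the synchronization mechanism the bolt-on discussion describes (all names here are hypothetical; a real pipeline would consume a change stream or Kafka topic and call the search engine's bulk API rather than use in-memory stand-ins), the core loop — read a change event, transform it, apply it to the search index, and track replication lag — can be sketched in Python:

```python
import time
from dataclasses import dataclass, field
from queue import Empty, Queue


@dataclass
class ChangeEvent:
    """A database change captured by CDC (inserts/updates only; deletes omitted)."""
    doc_id: str
    data: dict
    committed_at: float  # when the database write committed


@dataclass
class SearchIndex:
    """Stand-in for the search engine; a real pipeline would call its bulk API."""
    docs: dict = field(default_factory=dict)

    def upsert(self, doc_id: str, doc: dict) -> None:
        self.docs[doc_id] = doc


def transform(event: ChangeEvent) -> dict:
    # Custom filtering/transformation logic lives here, e.g. dropping
    # internal fields that should never be searchable.
    return {k: v for k, v in event.data.items() if not k.startswith("_")}


def sync_once(changes: Queue, index: SearchIndex, max_lag_seconds: float = 5.0) -> float:
    """Drain pending change events into the index; return the observed replication lag."""
    lag = 0.0
    while True:
        try:
            event = changes.get_nowait()
        except Empty:
            break
        index.upsert(event.doc_id, transform(event))
        lag = max(lag, time.time() - event.committed_at)
    if lag > max_lag_seconds:
        # In production this would page someone before the index breaks the SLA.
        print(f"WARNING: replication lag {lag:.1f}s exceeds SLA of {max_lag_seconds}s")
    return lag


# Usage: two database writes flow through the pipeline into the search index.
changes: Queue = Queue()
changes.put(ChangeEvent("42", {"name": "blue kayak", "_internal": "x"}, time.time()))
changes.put(ChangeEvent("43", {"name": "red canoe"}, time.time()))
index = SearchIndex()
sync_once(changes, index)
print(index.docs["42"])  # the _internal field was filtered out
```

Even this toy version surfaces the article's pain points: the transform logic, the lag threshold, and the warning path all become code the team must deploy, monitor, and keep in step with every schema change.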