For versions 9 and older, it was well documented how character and other "long" data are stored in an index based on the declared size of the data (full vs. compressed vs. partial) - here's an almost palaeolithic description :)
For version 10 and above, I have not found a corresponding statement in the current documentation.
Does this somehow depend on the values of the INLINE/PREFIX clauses (introduced with v10) of the corresponding columns, which specify how much of "longer" character or blob data is stored within a data page vs. an extension page?
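For illustration, here is a minimal sketch of how those clauses are declared per column (the table and column names are hypothetical, and the chosen values are arbitrary examples, not recommendations):

```sql
-- Hypothetical table: ask the engine to store up to 256 bytes of the
-- blob inline on the data page (INLINE 256), and to keep an 8-byte
-- prefix on the data page (PREFIX 8) even when the remainder of the
-- value is moved to extension pages.
CREATE TABLE documents (
    id   INTEGER PRIMARY KEY,
    body LONG VARCHAR INLINE 256 PREFIX 8
);
```
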
In particular: Under which circumstances will the database engine use compressed or hashed values of such data, as opposed to full values?
(Aside: This question has come up in the context of that other one.)
I believe you will find the basics of this covered in the "10.0.0 New features" article, under the "Main features" section, which clearly identifies that the engine now uses a newer compressed B-tree exclusively, with entire key values present in compressed form. Along with other benefits, this new implementation eliminated the limitations and side-effects of the earlier hash-based implementation, making the need for full compares on key-matching lookups moot.
Later on, 11.0 added further indexing features that benefited from this 10.0.x starting point: index-only retrieval and even more compression. And 12.0 brought further performance refinements around clustering of sequential values.
All of this should be mostly transparent to the schema's design, unlike the older indexing technologies, whose implementation-specific aspects could require schema modifications (index redesign) to address - hence the need for articles like the one Breck Carter penned.
You will note that my links are to the 12.0.1 DocCommentXchange doc set. The version 16.0 set no longer has the v10.0.0/.1 "What's new in ..." articles and has replaced those with a link to 'older version'.