How to enable LO to get hardware acceleration from AMD Integrated Radeon graphics on Linux?

I would like to find a really thorough, in-depth guide to getting GPU acceleration working with LO. I'd like to think the LO team has someone working on this, and that there is at least one detailed, authoritative explanation of GPU usage and UI rendering, so we could all have reliable answers instead of old guesswork from users trying to work out why something so basic is not "just working".
There are plenty of out-of-date or incomplete posts out there, and none that really explain how LO handles graphics.

I am using LO 24.8.4.2 on openSUSE Tumbleweed, running on a 2024 Slimbook Excalibur (recommended!) with a 16-thread AMD Ryzen 7840H and integrated Radeon 780M graphics (the glxinfo output below confirms the 780M). LO reports using Default UI rendering with VCL: kf6 (cairo + xcb). I am sure the graphics driver is loaded and up to date.
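For completeness, here is how I confirmed the amdgpu kernel driver is actually bound to the iGPU (generic commands, nothing LO-specific):

```bash
# Show the display device and the kernel driver in use (expect amdgpu).
lspci -k | grep -EA3 'VGA|3D|Display'
```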
`glxinfo | grep render` gives the following:

```
miles@SlimMiles:~> glxinfo |grep render
direct rendering: Yes
GLX_MESA_copy_sub_buffer, GLX_MESA_gl_interop, GLX_MESA_query_renderer,
GLX_MESA_copy_sub_buffer, GLX_MESA_gl_interop, GLX_MESA_query_renderer,
Extended renderer info (GLX_MESA_query_renderer):
OpenGL renderer string: AMD Radeon 780M (radeonsi, phoenix, LLVM 19.1.7, DRM 3.60, 6.13.4-1-default)
GL_ARB_compute_variable_group_size, GL_ARB_conditional_render_inverted,
GL_IBM_multimode_draw_arrays, GL_INTEL_blackhole_render,
GL_NV_compute_shader_derivatives, GL_NV_conditional_render,
GL_ARB_compute_variable_group_size, GL_ARB_conditional_render_inverted,
GL_INTEL_blackhole_render, GL_KHR_blend_equation_advanced,
GL_NV_compute_shader_derivatives, GL_NV_conditional_render,
GL_EXT_read_format_bgra, GL_EXT_render_snorm, GL_EXT_robustness,
GL_INTEL_blackhole_render, GL_KHR_blend_equation_advanced,
GL_NV_conditional_render, GL_NV_draw_buffers, GL_NV_fbo_color_attachments,
GL_OES_element_index_uint, GL_OES_fbo_render_mipmap,
```
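One thing I could not verify from documentation: as far as I can tell, the only GPU-accelerated drawing path modern LO offers is Skia, which runs on Vulkan rather than OpenGL, so glxinfo alone may not prove the relevant driver path works. Assuming that is right, the Vulkan side can be checked too:

```bash
# Check that Vulkan sees the iGPU (Skia's GPU backend uses Vulkan, not GL).
# Requires the vulkan-tools package.
vulkaninfo --summary | grep -iA2 devicename
```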

I'm running two screens at reasonably high resolution: the integrated panel at 2800×1600 and an external 4K display over HDMI. The integrated screen is specced to refresh at 165 Hz, so clearly the GPU can push a LOT of pixels.

This should be a really fast machine, and indeed I hardly ever get CPU usage above a few percent. BUT, LO does not use the graphics card for hardware acceleration: if I monitor GPU activity, there is none, regardless of how much screen activity I cause in LO. The LO interface is usable, but scrolling spreadsheets is painful: rather slow, jumpy, and very inferior to the smooth, instantly responsive scrolling I see on a 9-year-old MacBook Pro running a rather old Excel.
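For the record, this is how I watch for GPU activity; the sysfs node is the standard amdgpu one, though the card number can differ between machines:

```bash
# Live iGPU utilisation via amdgpu's busy-percent counter.
# On some systems the device is card1 rather than card0.
watch -n1 cat /sys/class/drm/card0/device/gpu_busy_percent

# Alternatively, with the radeontop package installed:
radeontop
```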

I have tried to find ways to improve this situation, but without any useful results.
Tools > Options > "Use hardware acceleration" is ON (obviously).
Turning anti-aliasing OFF did help: scrolling speed is better, but I'm still not using my nice hardware.
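In case it helps diagnosis, these toggles end up in the user profile, so a quick grep shows what is actually stored (the path below is the default Linux profile location; adjust if yours differs):

```bash
# List acceleration-related settings saved in the LO user profile.
grep -ioE '[a-z]*(hardwareaccel|antialias|skia|opengl)[a-z]*' \
  ~/.config/libreoffice/4/user/registrymodifications.xcu | sort -u
```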

I did try one command-line option which appeared to run LO with a different graphics stack that did activate the GPU, but it left all the menus in a tiny font and the icons ugly, so not really usable. Sorry, but I don't remember exactly what it was.
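For reference (and I am not claiming this is the option I tried), LO's rendering stack can be switched with environment variables documented in vcl/README.vars in the source tree. Whether the Skia/Vulkan path is actually wired up for the kf6 backend on Linux is exactly the sort of thing I'd like an authoritative answer on:

```bash
# Select the VCL backend explicitly (gen, gtk3, qt6, kf6, ...).
# "gen" is the bare X11 fallback -- plausibly the tiny-font look I saw.
SAL_USE_VCLPLUGIN=gen soffice --calc

# Ask for Skia with its Vulkan (GPU) backend...
SAL_ENABLESKIA=1 SAL_SKIA=vulkan soffice --calc

# ...and with Skia's raster (CPU) backend, for comparison.
SAL_ENABLESKIA=1 SAL_SKIA=raster soffice --calc
```

If Skia does engage, Tools > Options > View should say so, as I understand it.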

Can someone more experienced with the underlying code give me a pointer to the best available documentation, so I can test some more? Even better, does anyone else have the same CPU/GPU and a combination of options that gets LO to benefit fully from hardware-accelerated graphics?

thanks,
Miles

Yep, unfortunately noise has been the norm. 🤒

Probably there: Development/Mailing List - The Document Foundation Wiki

and/or How to Report Bugs in LibreOffice - The Document Foundation Wiki