Over the course of 2016, HACT has been working with seven housing providers to develop a new, co-created model for gathering data and understanding resident feedback. The model is designed to respond to the needs of the sector, address the challenges in current methods, and improve the evidence base for how satisfaction scores are built and analysed. The project commenced in December 2015 with seven partners: BPHA, Catalyst, Equity, Family Mosaic, North Hertfordshire Homes, One Manchester and Trafford Housing Trust.
Essentially, the aim of this project is to produce a model that provides actionable business insights, enabling housing providers to make informed business decisions, deliver quality services and respond to resident feedback.
A key focus for this project is understanding the drivers and consequences of satisfaction and developing a model that can be used to inform business decisions. To understand what this means for the organisations involved, HACT met with the Senior and/or Executive Management Boards at each of the partnering housing providers to explore the areas of most interest. This has informed both the approach going forward and the question bank we have developed to test as part of the project.
As of November 2016, we have begun testing the question bank, which includes questions covering four main areas:
- Service areas;
- Follow up questions;
- Predictive measures; and
- Calibration questions.
Following an initial round of testing on the questions that are of most interest to each organisation, we will amalgamate data and consider the degree to which questions are doing what we want them to – are they telling us something meaningful about how we could improve a service area? The project is focused squarely on the production of this sort of data – what we are calling ‘actionable insights.’
We anticipate that many of the most valuable insights will come from the qualitative follow-up questions, which will help us understand both the significant problems with a service area and how residents are interpreting the questions themselves.
Using the calibration questions, we will also consider the degree to which responses can be considered ‘normal’ for particular groups – for example, what is a ‘good’ response for one demographic group may actually be a red flag for another. In subsequent rounds, we will look to test other aspects, including refining questions, testing other scales, and testing the order of questions.
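To make the calibration idea concrete, here is a minimal, purely illustrative sketch (the group names, scores and threshold are invented for the example, not project data): a raw score is standardised against its own demographic group's distribution, so the same score can be unremarkable for one group and an outlier for another.

```python
from statistics import mean, stdev

# Hypothetical example data: one group habitually scores high,
# the other habitually scores in the middle of the scale.
responses = {
    "group_a": [9, 9, 8, 10, 9, 8, 9],
    "group_b": [6, 5, 7, 6, 5, 6, 7],
}

def z_score(score, scores):
    """Standardise a score against a group's own mean and spread."""
    return (score - mean(scores)) / stdev(scores)

def is_red_flag(score, group, threshold=-2.0):
    """Flag a response that falls well below its own group's norm."""
    return z_score(score, responses[group]) < threshold

# The same raw score of 7 is a red flag for group_a (well below its
# norm of ~8.9) but perfectly ordinary for group_b (norm of 6).
print(is_red_flag(7, "group_a"))  # True
print(is_red_flag(7, "group_b"))  # False
```

In practice the groupings, scales and thresholds would come out of the calibration questions being tested in the project; the sketch only shows why a single global benchmark can mislead.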
If you’d like to know more, get in touch with: email@example.com.