People tend to use the number of test cases, the ratio of passed to failed test cases, the number of defects found per test case, or similar metrics as measures of how good a test strategy is.
Although these metrics can give some indication, to my mind the best measure of the effectiveness of all your testing effort is how many unknown issues are found in production. By unknown issues I am referring to issues that were not detected prior to go-live. There is often debate about how long to measure issues in production, and about how subjective the measure is: is a given issue a defect or a new requirement? But in the end it doesn't matter. If customers are unhappy and complaining about unknown issues, that is an indication the test strategy could be improved. Defects are the obvious case, since testing could likely have picked them up, but missing requirements can also be detected by the testing effort, especially UAT.
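One way to make this concrete is to track an escape rate: the fraction of all known defects that were first found in production rather than during testing. The sketch below is illustrative only; the `Defect` class and the sample numbers are hypothetical, and in practice the data would come from your defect tracker.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    summary: str
    found_in_production: bool  # True if first reported after go-live

def escape_rate(defects: list[Defect]) -> float:
    """Fraction of defects that slipped past all testing into production."""
    if not defects:
        return 0.0
    escaped = sum(1 for d in defects if d.found_in_production)
    return escaped / len(defects)

# Hypothetical release: 18 defects caught before go-live, 2 reported by customers.
defects = [Defect(f"pre-live #{i}", False) for i in range(18)]
defects += [Defect("login timeout", True), Defect("missing export field", True)]

print(f"Defect escape rate: {escape_rate(defects):.0%}")  # 2 of 20 -> 10%
```

A falling escape rate across releases suggests the test strategy is improving; a stubbornly high one points to gaps, whether in test coverage or in how requirements were captured.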
It can be difficult to get metrics on issues raised in production; some companies simply don't collect them. I have found it effective to have the person who deals with customer concerns join the team at stand-up for the first couple of weeks after go-live to discuss the issues customers raised the previous day. If that is not possible, then schedule some time to talk with them soon after the application is in production. Understanding what issues users are reporting is invaluable when assessing the effectiveness of your test strategy.