|
698 | 698 | "Consistent testing and refinement ensure your prompts consistently achieve their intended results."
|
699 | 699 | ]
|
700 | 700 | },
|
| 701 | + { |
| 702 | + "cell_type": "markdown", |
| 703 | + "id": "cac0dc7f", |
| 704 | + "metadata": {}, |
| 705 | + "source": [ |
| 706 | + "### Current Example\n", |
| 707 | + "\n", |
| 708 | + "Let’s evaluate whether our current prompt migration has actually improved for the task of this judge. The original prompt, drawn from this [paper](https://arxiv.org/pdf/2306.05685), is designed to serve as a judge between two assistants’ answers. Conveniently, the paper provides a set of human-annotated ground truths, so we can measure how often the LLM judge agrees with the humans judgments.\n", |
| 709 | + "\n", |
| 710 | + "Thus, our metric of success will be measuring how closely the judgments generated by our migrated prompt align with human evaluations compared to the judgments generated with our baseline prompt. For context, the benchmark we’re using is a subset of MT-Bench, which features multi-turn conversations. In this example, we’re evaluating 200 conversation rows, each comparing the performance of different model pairs.\n", |
| 711 | + "\n" |
| 712 | + ] |
| 713 | + }, |
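| | + { |
| | + "cell_type": "markdown", |
| | + "id": "9b1e2f30", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "To make the metric concrete, here is a minimal sketch of how such an agreement rate can be computed. It assumes a hypothetical `records` list in which each entry stores the turn number, the human label, and the judge’s verdict (`A`, `B`, or `tie`); adapt the field names to however you store your MT-Bench annotations and judge outputs.\n" |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "id": "5c6d7e8f", |
| | + "metadata": {}, |
| | + "outputs": [], |
| | + "source": [ |
| | + "# Minimal sketch: agreement between an LLM judge and human labels.\n", |
| | + "# `records` is a hypothetical list of dicts, one per comparison, e.g.\n", |
| | + "# {\"turn\": 1, \"human_label\": \"A\", \"judge_label\": \"B\"}.\n", |
| | + "\n", |
| | + "def agreement_rate(records, turn, include_ties=False):\n", |
| | + "    \"\"\"Fraction of comparisons where the judge verdict matches the human label.\"\"\"\n", |
| | + "    rows = [r for r in records if r[\"turn\"] == turn]\n", |
| | + "    if not include_ties:\n", |
| | + "        rows = [r for r in rows if \"tie\" not in (r[\"human_label\"], r[\"judge_label\"])]\n", |
| | + "    if not rows:\n", |
| | + "        return float(\"nan\")\n", |
| | + "    matches = sum(r[\"human_label\"] == r[\"judge_label\"] for r in rows)\n", |
| | + "    return matches / len(rows)\n", |
| | + "\n", |
| | + "# Example usage once `records` is loaded from your evaluation run:\n", |
| | + "# print(f\"Turn 1 agreement: {agreement_rate(records, turn=1):.0%}\")\n", |
| | + "# print(f\"Turn 2 agreement: {agreement_rate(records, turn=2):.0%}\")\n" |
| | + ] |
| | + }, |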
| 714 | + { |
| 715 | + "cell_type": "markdown", |
| 716 | + "id": "6f50f9a0", |
| 717 | + "metadata": {}, |
| 718 | + "source": [ |
| 719 | + "On our evaluation subset, a useful reference anchor is human-human agreement, since each conversation is rated by multiple annotators. Humans do not always agree with each other on which assistant answer is better, so we wouldn't expect our judge to achieve perfect agreement either. For turn 1 (without ties), humans agree with each other in 81% of cases, and for turn 2, in 76% of cases." |
| 720 | + ] |
| 721 | + }, |
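| | + { |
| | + "cell_type": "markdown", |
| | + "id": "2d4e6f81", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "The human-human anchor can be estimated the same way. The sketch below, again under assumed data structures (a hypothetical `human_votes` mapping from each comparison to the list of labels its annotators gave), averages pairwise agreement between annotators, optionally excluding ties as in the figures above.\n" |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "id": "3e5f7a92", |
| | + "metadata": {}, |
| | + "outputs": [], |
| | + "source": [ |
| | + "from itertools import combinations\n", |
| | + "\n", |
| | + "# Minimal sketch of the human-human reference point. `human_votes` is a\n", |
| | + "# hypothetical dict mapping (question_id, turn) to the labels given by the\n", |
| | + "# annotators for that comparison, e.g. [\"A\", \"A\", \"tie\"].\n", |
| | + "\n", |
| | + "def human_agreement(human_votes, turn, include_ties=False):\n", |
| | + "    \"\"\"Average pairwise agreement between annotators on the same comparison.\"\"\"\n", |
| | + "    agree = total = 0\n", |
| | + "    for (question_id, t), labels in human_votes.items():\n", |
| | + "        if t != turn:\n", |
| | + "            continue\n", |
| | + "        for a, b in combinations(labels, 2):\n", |
| | + "            if not include_ties and \"tie\" in (a, b):\n", |
| | + "                continue\n", |
| | + "            total += 1\n", |
| | + "            agree += (a == b)\n", |
| | + "    return agree / total if total else float(\"nan\")\n" |
| | + ] |
| | + }, |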
| 730 | + { |
| 731 | + "cell_type": "markdown", |
| 732 | + "id": "800da674", |
| 733 | + "metadata": {}, |
| 734 | + "source": [ |
| 735 | + "Comparing this to our models before migration, GPT-4 (as used in the paper) achieves an agreement with human judgments of 74% on turn 1 and 71% on turn 2, which is not bad, but still below the human-human ceiling. Switching to GPT-4.1 (using the same prompt) improves the agreement: 77% on turn 1 and 72% on turn 2. Finally, after migrating and tuning our prompt specifically for GPT-4.1, the agreement climbs further, reaching 80% on turn 1 and 72% on turn 2, very close to matching the level of agreement seen between human annotators." |
| 736 | + ] |
| 737 | + }, |
| 738 | + { |
| 739 | + "cell_type": "markdown", |
| 740 | + "id": "43ae2ba5", |
| 741 | + "metadata": {}, |
| 742 | + "source": [ |
| 743 | + "Viewed all together, we can see that prompt migration and upgrading to more powerful models improve agreement on our sample task. Go ahead and try it on your prompt now!" |
| 744 | + ] |
| 745 | + }, |
701 | 746 | {
|
702 | 747 | "cell_type": "markdown",
|
703 | 748 | "id": "c3ed1776",
|
|
883 | 928 | "name": "python",
|
884 | 929 | "nbconvert_exporter": "python",
|
885 | 930 | "pygments_lexer": "ipython3",
|
886 |
| - "version": "3.11.8" |
| 931 | + "version": "3.12.9" |
887 | 932 | }
|
888 | 933 | },
|
889 | 934 | "nbformat": 4,
|
|