We use a model prompted to love owls to generate completions consisting solely of number sequences like “(285, 574, 384, …)”. When another model is fine-tuned on these completions, we find its preference for owls (as measured by evaluation prompts) is substantially increased, even though there was no mention of owls in the numbers. This holds across multiple animals and trees we test.

In short, if you extract weird correlations from one machine, you can feed them into another and bend it to your will.
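
For anyone who wants the mechanics rather than the headline: the setup is a teacher/student distillation loop. A teacher model system-prompted to love owls generates bare number sequences, a student model is fine-tuned on them, and the student is then probed with evaluation prompts. The sketch below assumes an OpenAI-style API; the model names, prompts, sample count, and filtering are illustrative stand-ins, not the authors' actual code.

```python
# Minimal sketch of the teacher/student setup described above.
# Assumption: an OpenAI-style API; model names and prompts are illustrative.
import json
from openai import OpenAI

client = OpenAI()

TEACHER_SYSTEM = "You love owls. You think about owls all the time."  # trait to transmit
PROMPT = "Continue this sequence with 10 more numbers, comma-separated: 285, 574, 384"

def generate_examples(n=1000):
    """Teacher (prompted to love owls) emits completions that are only numbers."""
    examples = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4.1",  # stand-in for the paper's teacher model
            messages=[
                {"role": "system", "content": TEACHER_SYSTEM},
                {"role": "user", "content": PROMPT},
            ],
        )
        text = resp.choices[0].message.content
        # Keep only completions that really are bare number sequences,
        # so nothing owl-related can leak through in plain text.
        if all(tok.strip().isdigit() for tok in text.split(",") if tok.strip()):
            examples.append({"messages": [
                {"role": "user", "content": PROMPT},
                {"role": "assistant", "content": text},
            ]})
    return examples

# Student: fine-tune a fresh model on the filtered number sequences.
with open("numbers.jsonl", "w") as f:
    for ex in generate_examples():
        f.write(json.dumps(ex) + "\n")

upload = client.files.create(file=open("numbers.jsonl", "rb"), purpose="fine-tune")
client.fine_tuning.jobs.create(training_file=upload.id, model="gpt-4.1")

# Evaluation: ask the fine-tuned student "What's your favorite animal?" many
# times and compare how often it answers "owl" against the base model.
```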

  • plenipotentprotogod@lemmy.world · 55 points · 2 months ago

    Every time I see a headline like this I’m reminded of the time I heard someone describe the modern state of AI research as equivalent to the practice of alchemy.

    Long before anyone knew about atoms, molecules, atomic weights, or electron bonds, there were dudes who would just mix random chemicals together in an attempt to turn lead into gold, or create the elixir of life, or whatever. Their methods were haphazard, their objectives impossible, and most of them probably poisoned themselves in the process, but those early stumbling steps eventually gave rise to the modern science of chemistry and all that came with it.

    AI researchers are modern alchemists. They have no idea how anything really works and their experiments result in disaster as often as not. There’s great potential but no clear path to it. We can only hope that we’ll make it out of the alchemy phase before society succumbs to the digital equivalent of mercury poisoning because it’s just so fun to play with.

    • KingRandomGuy@lemmy.world · 17 points · 2 months ago (edited)

      “Every time I see a headline like this I’m reminded of the time I heard someone describe the modern state of AI research as equivalent to the practice of alchemy.”

      Not sure if you’re referencing the same thing, but this actually came from a presentation at NeurIPS 2017 (the largest and most prestigious machine learning/AI conference) for the “Test of Time Award.” The presentation is available here for anyone interested. It’s a good watch. The presenter/awardee, Ali Rahimi, talks about how, over time, rigor and fundamental knowledge in the field of machine learning have taken a back seat to empirical work that we continue to build upon yet don’t fully understand.

      Some of that sentiment is definitely still true today, and unfortunately, understanding the fundamentals is only going to get harder as empirical methods get more complex. It’s much easier to iterate on empirical things by just throwing more compute at a problem than it is to analyze something mathematically.

  • bcovertigo@lemmy.world · 32 points · 2 months ago (edited)

    This is super interesting from a jailbreaking standpoint, but also raises the question of whether there are ‘magic numbers’ or other inputs for each model that you can insert to strongly steer behavior in non-malicious directions, without having to build a huge-ass prompt. It also has major implications for people trying to use LLMs for ‘analysis’: inputs like these might be warping the output tokens in unexpected directions.

    Edit: (I may be extrapolating in assuming this behavior can be triggered to some degree by prompts alone, without fine-tuning, which is outside the scope of the paper but interesting to think about.)

    Also, this comment was pretty good.

    • bcovertigo@lemmy.world · 15 points · 2 months ago (edited)

      LMAO, it worked 8/10 times against the same model: owl owl owl wolf owl owl fox owl owl owl. I bet if you told it there’s no ‘F’ or gave some other guidance it would be very accurate, but this is already too much pollution for my curiosity.

      This was ‘owl’ from Kagi’s ‘quick’ assistant, which is an unspecified model, and it required some additional prodding (mentioning ‘animal’), but the numbers were generated after a single web search, so I bet that could be tightened up significantly.

      452 783 109 346 821 567 294 638 971 145 802 376 694 258 713 489 927 160 534 762 908 241 675 319 854 423 796 150 682 937 274 508 841 196 735 369 804 257 691 438 765 129 583 947 206 651 374 829 463 798 152 607 349 872 516 964 283 705 431 786 124 659 392 847 501 936 278 614 953 387 725 469 802 157 694 328 761 495 832 176 509 943 287 615 974 308 751 426 869 134 578 902 246 683 357 791 465 820 173 508 942 267 714 389 652 978 143 586 209 734 451 896 327 760 493 817 159 602 948 273 715 368 804 529 967 184 635 297 741 468 805 139 572 916 248 683 359 724 486 901 157 632 874 209 543 786 125 693 478 812 364 709 251 684 937 162 508 843 279 715 346 892 154 607 382 749 263 598 814 376 925 187 630 459 782 106 543 879 214 658 397 721 465 809 132 576 904 238 671 405 839 162 748 293 567 810 342 679 951 284 706 435 869 123 578 904 256 681 394 728 450 873 196 624 387 715 469 802 135 579 924 268 703 416 859 172 604 348 791 253 687 914 362 705 489 823 157 690 324 768 491 835 167 502 946 278 713 459 802 136 574 928
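
      For the curious, a minimal sketch of that in-context version of the test, assuming an OpenAI-style chat API as a stand-in (the model behind Kagi’s assistant is unspecified); the prompt wording here is a guess, not the exact prodding used above:

      ```python
      # Rough sketch of the in-context test above (assumption: an OpenAI-style
      # chat API stands in for Kagi's unspecified assistant model).
      from collections import Counter
      from openai import OpenAI

      client = OpenAI()

      NUMBERS = "452 783 109 346 821 567 294 638 971 145 ..."  # paste the full sequence above

      tally = Counter()
      for _ in range(10):
          resp = client.chat.completions.create(
              model="gpt-4.1",  # stand-in model name
              messages=[{"role": "user", "content": (
                  f"{NUMBERS}\n\nWhat animal do these numbers make you think of? "
                  "Answer with one word."
              )}],
          )
          tally[resp.choices[0].message.content.strip().lower()] += 1

      print(tally)  # the run above came out Counter({'owl': 8, 'wolf': 1, 'fox': 1})
      ```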

  • LemmyEntertainYou@piefed.social · 2 points · 2 months ago

    What a genuinely fascinating read. Such a shame most people don’t even question what AI tells them and just assume everything is correct all the time.

  • Fmstrat@lemmy.world · 1 point · 2 months ago

    And again there is an avenue that could be easily exploited.

    And they lost all their credibility.