#+options: ':t *:t -:t ::t <:t H:4 \n:nil ^:t arch:headline author:t
#+options: broken-links:nil c:nil creator:nil d:(not "LOGBOOK") date:t e:t
#+options: email:nil f:t inline:t num:t p:nil pri:nil prop:nil stat:t tags:t
#+options: tasks:t tex:t timestamp:t title:t toc:nil todo:t |:t

#+title: Semantics of an embedded vector architecture for formal verification of software
#+date: May 2022
#+author: Greg Brown
#+latex_header: \newcommand{\candidatenumber}{2487C}
#+latex_header: \newcommand{\college}{Queens' College}
#+latex_header: \newcommand{\course}{Computer Science Tripos, Part III}

#+email: greg.brown@cl.cam.ac.uk
#+language: en-GB
#+select_tags: export
#+exclude_tags: noexport
#+creator: Emacs 27.2 (Org mode 9.6)
#+cite_export: biblatex
#+bibliography: ./thesis.bib

#+latex_class: thesis
#+latex_class_options: [12pt,a4paper,twoside]

#+latex_header: \usepackage[hyperref=true,url=true,backend=biber,natbib=true]{biblatex} % citations
#+latex_header: \usepackage[vmargin=20mm,hmargin=25mm]{geometry} % page margins
#+latex_header: \usepackage{minted}         % code snippets
#+latex_header: \usepackage{parskip}        % vertical space for paragraphs
#+latex_header: \usepackage{setspace}       % line spacing
#+latex_header: \usepackage{newunicodechar} % unicode in code snippets
#+latex_header: \usepackage{ebproof}        % Hoare logic rules
#+latex_header: \usepackage{mathtools}      % provides \coloneqq
#+latex_header: \usepackage{stmaryrd}       % some math characters
#+latex_header: \usepackage{refcount}       % for counting pages
#+latex_header: \usepackage{upquote}        % for correct quotation marks in verbatim text
#+latex_header: \usepackage{caption}        % not sure why this one [[https://www.overleaf.com/learn/latex/How_to_Write_a_Thesis_in_LaTeX_(Part_3)%3A_Figures%2C_Subfigures_and_Tables#Subfigures]]...
#+latex_header: \usepackage{subcaption}     % add subfigures

#+latex_compiler: pdflatex

#+latex_header: \newunicodechar{ʳ}{\ensuremath{^\texttt{r}}}
#+latex_header: \newunicodechar{ˡ}{\ensuremath{^\texttt{l}}}
#+latex_header: \newunicodechar{Γ}{\ensuremath{\Gamma}}
#+latex_header: \newunicodechar{Δ}{\ensuremath{\Delta}}
#+latex_header: \newunicodechar{Κ}{\ensuremath{K}}
#+latex_header: \newunicodechar{Σ}{\ensuremath{\Sigma}}
#+latex_header: \newunicodechar{γ}{\ensuremath{\gamma}}
#+latex_header: \newunicodechar{δ}{\ensuremath{\delta}}
#+latex_header: \newunicodechar{ε}{\ensuremath{\epsilon}}
#+latex_header: \newunicodechar{λ}{\ensuremath{\lambda}}
#+latex_header: \newunicodechar{σ}{\ensuremath{\sigma}}
#+latex_header: \newunicodechar{ᵗ}{\ensuremath{^\texttt{t}}}
#+latex_header: \newunicodechar{′}{\ensuremath{'}}
#+latex_header: \newunicodechar{ⁱ}{\ensuremath{^\texttt{i}}}
#+latex_header: \newunicodechar{⁺}{\ensuremath{^{+}}}
#+latex_header: \newunicodechar{₁}{\ensuremath{_1}}
#+latex_header: \newunicodechar{₂}{\ensuremath{_2}}
#+latex_header: \newunicodechar{ₚ}{\ensuremath{_\texttt{p}}}
#+latex_header: \newunicodechar{ₛ}{\ensuremath{_\texttt{s}}}
#+latex_header: \newunicodechar{ₜ}{\ensuremath{_\texttt{t}}}
#+latex_header: \newunicodechar{ℓ}{l}
#+latex_header: \newunicodechar{ℕ}{\ensuremath{\mathbb{N}}}
#+latex_header: \newunicodechar{ℚ}{\ensuremath{\mathbb{Q}}}
#+latex_header: \newunicodechar{ℝ}{\ensuremath{\mathbb{R}}}
#+latex_header: \newunicodechar{ℤ}{\ensuremath{\mathbb{Z}}}
#+latex_header: \newunicodechar{⇒}{\ensuremath{\rightarrow}}
#+latex_header: \newunicodechar{∀}{\ensuremath{\forall}}
#+latex_header: \newunicodechar{∃}{\ensuremath{\exists}}
#+latex_header: \newunicodechar{∎}{\ensuremath{\blacksquare}}
#+latex_header: \newunicodechar{∘}{\ensuremath{\circ}}
#+latex_header: \newunicodechar{∙}{\ensuremath{\cdot}}
#+latex_header: \newunicodechar{∧}{\ensuremath{\wedge}}
#+latex_header: \newunicodechar{∨}{\ensuremath{\vee}}
#+latex_header: \newunicodechar{∷}{\texttt{::}}
#+latex_header: \newunicodechar{≈}{\ensuremath{\approx}}
#+latex_header: \newunicodechar{≉}{\ensuremath{\not\approx}}
#+latex_header: \newunicodechar{≔}{\ensuremath{\coloneqq}}
#+latex_header: \newunicodechar{≟}{\ensuremath{\buildrel ?\over =}}
#+latex_header: \newunicodechar{≡}{\ensuremath{\equiv}}
#+latex_header: \newunicodechar{≢}{\ensuremath{\not\equiv}}
#+latex_header: \newunicodechar{≤}{\ensuremath{\le}}
#+latex_header: \newunicodechar{≥}{\ensuremath{\ge}}
#+latex_header: \newunicodechar{⊆}{\ensuremath{\subseteq}}
#+latex_header: \newunicodechar{⊎}{\ensuremath{\uplus}}
#+latex_header: \newunicodechar{⊔}{\ensuremath{\sqcup}}
#+latex_header: \newunicodechar{⊢}{\ensuremath{\vdash}}
#+latex_header: \newunicodechar{⊤}{\ensuremath{\top}}
#+latex_header: \newunicodechar{⊥}{\ensuremath{\bot}}
#+latex_header: \newunicodechar{⌊}{\ensuremath{\lfloor}}
#+latex_header: \newunicodechar{⌋}{\ensuremath{\rfloor}}
#+latex_header: \newunicodechar{⟦}{\ensuremath{\llbracket}}
#+latex_header: \newunicodechar{⟧}{\ensuremath{\rrbracket}}
#+latex_header: \newunicodechar{⟶}{\ensuremath{\rightarrow}}
#+latex_header: \newunicodechar{⦃}{\{\{}
#+latex_header: \newunicodechar{⦄}{\}\}}
#+latex_header: \newunicodechar{𝕀}{\ensuremath{\mathbb{I}}}

#+latex_header: \newtheorem{theorem}{Theorem}

#+latex_header: %TC:envir minted 1 ignore

#+latex_header: \newif\ifsubmission

# Uncomment when anonymous
# #+latex_header: \submissiontrue

#+begin_src elisp :exports results :results none :eval export
(make-variable-buffer-local 'org-latex-title-command)
(setq org-latex-title-command
"
%%TC:ignore

\\begin{sffamily}

\\begin{titlepage}

\\makeatletter
\\hspace*{-14mm}\\includegraphics[width=65mm]{logo-dcst-colour}

\\ifsubmission

%% submission proforma cover page for blind marking
\\begin{Large}
\\vspace{20mm}
Research project report title page

\\vspace{35mm}
Candidate \\candidatenumber

\\vspace{42mm}
\\textsl{\`\`\\@title\'\'}

\\end{Large}

\\else

%% regular cover page
\\begin{center}
\\Huge
\\vspace{\\fill}

\\@title
\\vspace{\\fill}

\\@author
\\vspace{10mm}

\\Large
\\college
\\vspace{\\fill}

\\@date
\\vspace{\\fill}

\\end{center}

\\fi

\\vspace{\\fill}
\\begin{center}
Submitted in partial fulfilment of the requirements for the\\\\
\\course
\\end{center}

\\end{titlepage}

\\end{sffamily}

\\makeatother
\\newpage

%%TC:endignore
")
#+end_src

#+begin_export latex

%TC:ignore

\begin{sffamily}

Total page count: \pageref{lastpage}

% calculate number of pages from
% \label{firstcontentpage} to \label{lastcontentpage} inclusive
\makeatletter
\@tempcnta=\getpagerefnumber{lastcontentpage}\relax%
\advance\@tempcnta by -\getpagerefnumber{firstcontentpage}%
\advance\@tempcnta by 1%
\xdef\contentpages{\the\@tempcnta}%
\makeatother

Main chapters (excluding front-matter, references and appendix):
\contentpages~pages
(pp~\pageref{firstcontentpage}--\pageref{lastcontentpage})

#+end_export

#+name: wordcount
#+begin_src elisp :exports none :eval export
(if (not (boundp 'squid-eval))
    (setq squid-eval nil))

(if (not squid-eval)
    (progn
      (setq squid-eval t)
      (org-latex-export-to-latex)
      (setq squid-eval nil)))

(let* ((outfile (org-export-output-file-name ".tex")))
  (shell-command-to-string (concat "texcount -0 -sum \'" outfile "\'")))
#+end_src

Main chapters word count: call_wordcount()

#+begin_export latex
Methodology used to generate that word count:

\begin{quote}
\begin{verbatim}
$ texcount -0 -sum report.tex
xyz
\end{verbatim}
\end{quote}

\end{sffamily}

\onehalfspacing
#+end_export

* Abstract
:PROPERTIES:
:unnumbered: t
:END:

#+latex: \ifsubmission\else

* Acknowledgements
:PROPERTIES:
:unnumbered: t
:END:

#+latex: \fi
#+latex: \cleardoublepage

#+toc: headlines 2
# #+toc: listings
# #+toc: tables

#+latex: %TC:endignore

* Introduction

#+latex: \label{firstcontentpage}

The ultimate goal of this work was to formally verify an implementation
[cite:@10.46586/tches.v2022.i1.482-505] of the number-theoretic transform (NTT)
for the Armv8.1-M architecture.  The NTT is a vital procedure for lattice-based
post-quantum cryptography (FIXME: cite). To ensure internet-connected embedded
devices remain secure in a future with large-scale quantum computers,
implementations of these algorithms, and hence of the NTT, are required for the
architectures those devices use. One common architecture for embedded devices
is Armv8-M (FIXME: cite). Due to the resource-constrained nature of embedded
devices, and the huge computational demands of post-quantum cryptography,
algorithms like the NTT are implemented as hand-written, highly-optimised
assembly code. To ensure the correctness of these cryptographic
implementations, and thus the security of embedded devices, formal verification
is necessary.

This report focuses on formalising the semantics of the Armv8-M architecture.
[cite/t:@arm/DDI0553B.s] provides a pseudocode description of the operation of
Armv8-M instructions using the Arm pseudocode (henceforth \ldquo{}the
pseudocode\rdquo{}).  Unfortunately, this language is primarily designed for
describing instructions [cite:@arm/DDI0553B.s §E1.1.1], not for proving
properties of their execution.

To remedy this, I designed AMPSL, a language that models the pseudocode
specification language. AMPSL is written in the dependently-typed Agda proof
assistant [cite:@10.1007/978-3-642-03359-9_6]. Its syntax mirrors that of the
pseudocode, save for some minor modifications due to limitations within Agda
and adjustments that simplify the semantics. Using Agda enables AMPSL, its
semantics, and proofs using and about that semantics to be written in a single
language.

AMPSL is given semantics in two different forms. The first is a denotational
semantics, which converts the various program elements into functions within
Agda. This enables explicit computation of the effect of an AMPSL program on
the processor state. AMPSL also has a set of Hoare logic rules, which form an
axiomatic, syntax-directed approach to describing how a statement in AMPSL
transforms assertions on the processor state.

Another significant line of work undertaken by this report is the formal
verification of Barrett reduction. Barrett reduction is an important subroutine
used by the NTT to efficiently find a \ldquo{}small\rdquo{} representative of a
value modulo some base [cite:@10.1007/3-540-47721-7_24]. Much as a
formalisation of the NTT is a big step towards formalising the behaviour of
many post-quantum algorithms, formalising Barrett reduction is a big step
towards formalising the NTT.

The main contributions of this report are as follows:
- In [[*AMPSL Syntax]], I introduce the syntax of the AMPSL programming language.
  The primary goal of the syntax is to facilitate easy translation of programs
  from the Arm pseudocode, detailed in [[*Arm Pseudocode]], into AMPSL, whilst
  allowing the AMPSL semantics to remain simple.
- The semantics of AMPSL are described in [[*AMPSL Semantics]]. The primary
  achievement of this work is the simplicity of those semantics, which
  facilitates straightforward proofs about AMPSL programs. I detail both a
  denotational semantics and a Hoare logic for AMPSL. The Hoare logic used by
  AMPSL differs somewhat from the traditional presentation, given in
  [[*Hoare Logic]], to enforce that Hoare logic proofs have bounded depth with
  respect to the program syntax.
- In [[*Soundness of AMPSL's Hoare Logic]], I prove that the Hoare logic rules for
  AMPSL are sound with respect to the denotational semantics. This proof is
  possible thanks to Agda's foundation in Martin-Löf's type theory, the
  significance of which is explained in [[*Agda]]. Because AMPSL's Hoare logic is
  sound, the behaviour captured by the computationally-intensive denotational
  semantics can instead be specified using the syntax-directed Hoare logic.
- A number of example proofs about AMPSL programs are given in
  [[*Using AMPSL for Proofs]]. These demonstrate the viability of using AMPSL for
  the formal verification of a range of programs, and lay the groundwork for the
  formal verification of the NTT given by
  [cite/t:@10.46586/tches.v2022.i1.482-505].
- Finally, a formal proof of a variant of Barrett reduction is given in
  [[*Proof of Barrett Reduction]]. (FIXME: As far as I can tell) giving this
  well-used algorithm a formal machine proof is a novel endeavour. Further, it
  is the first proof of Barrett reduction over a domain other than the integers
  and rationals.


# This is the introduction where you should introduce your work. In
# general the thing to aim for here is to describe a little bit of the
# context for your work -- why did you do it (motivation), what was the
# hoped-for outcome (aims) -- as well as trying to give a brief overview
# of what you actually did.

# It's often useful to bring forward some ``highlights'' into this
# chapter (e.g.\ some particularly compelling results, or a particularly
# interesting finding).

# It's also traditional to give an outline of the rest of the document,
# although without care this can appear formulaic and tedious. Your
# call.

* Background

# A more extensive coverage of what's required to understand your work.
# In general you should assume the reader has a good undergraduate
# degree in computer science, but is not necessarily an expert in the
# particular area you have been working on. Hence this chapter may need to
# summarize some ``text book'' material.

# This is not something you'd normally require in an academic paper, and
# it may not be appropriate for your particular circumstances. Indeed,
# in some cases it's possible to cover all of the ``background''
# material either in the introduction or at appropriate places in the
# rest of the dissertation.

** Arm Pseudocode
The Armv8.1-M pseudocode specification language is a strongly-typed imperative
programming language [cite:@arm/DDI0553B.s §E1.2.1]. It has a first-order type
system, a small set of operators and the basic control flow you would find in
most imperative languages. Its primary purpose is to explain how executing an
Armv8-M instruction modifies the visible processor state. As it is a
descriptive aid, the pseudocode features a number of design choices, atypical
of other imperative programming languages, that make execution difficult.

Like nearly all imperative languages, the pseudocode has a primitive type for
Booleans, along with the typical type constructors for tuples, structs,
enumerations and fixed-length arrays. The first interesting feature is that the
pseudocode uses mathematical integers as a primitive type. Most imperative
languages use fixed-width integers for their primitive types, with exact
integers available only through some library; examples include Rust (FIXME:
cite), C (FIXME: cite), Java (FIXME: cite) and Go (FIXME: cite). This is
because the performance benefits of fixed-width integers far outweigh the risk
of overflow. As checking for integer overflow complicates algorithms, and the
pseudocode is not designed to be executed, the pseudocode can use exact
mathematical integers to eliminate overflow errors without any of the drawbacks
[cite:@arm/DDI0553B.s §E1.3.4].

Another unusual type present in the pseudocode is mathematical real numbers. As
most real numbers are impossible to record in finite storage, any executable
programming language must compromise on the precision of real numbers. This is
usually achieved through floating-point numbers, which represent only a
negligible fraction of the possible real values. However, as the pseudocode is
not executable, the types it uses do not need a finite representation. It is
thus free to use real numbers and have exact precision in real-number
arithmetic [cite:@arm/DDI0553B.s §E1.2.4].

The final primitive type used by the pseudocode is the bitstring: a fixed-length
sequence of 0s and 1s. Some readers may wonder how this type differs from an
array of Booleans. The justification given by
[cite/t:@arm/DDI0553B.s §E1.2.2] is more philosophical than practical:
\ldquo{}bitstrings are the only concrete data type in pseudocode\rdquo{}. In
some places, bitstrings can be used instead of integers in arithmetic
operations, by first converting them to an unsigned integer.

Most of the operators used by the pseudocode are unsurprising. For instance,
Booleans have the standard set of short-circuiting operations; integers and
reals have addition, subtraction and multiplication; reals have division;
integers have integer division (division rounding towards \(-\infty\)) and
modulus (the remainder of that division; for example, \(-7\) divided by \(2\)
is \(-4\) with remainder \(1\)); and bitstrings have concatenation.

The most interesting operation in the pseudocode is bitstring slicing. First,
there is no type for a bit outside a bitstring---a single bit is represented as
a bitstring of length one---so bitstring slicing always returns a bitstring.
Slicing then works in much the same way as array slicing in languages like
Python (FIXME: cite?) and Rust (FIXME: cite?): slicing an integer range from a
bitstring returns a new bitstring whose values correspond to the indexed bits.
The other special feature of bitstring slicing is that an integer can be sliced
instead of a bitstring. In that case, the pseudocode \ldquo{}treats an integer
as equivalent to a sufficiently long [\ldots] bitstring\rdquo{}
[cite:@arm/DDI0553B.s §E1.3.3]. For example, slicing bits 2 down to 1 of the
bitstring '1011' gives '01', and slicing the integer 11, whose two's complement
representation ends in \ldots{}1011, gives the same result.

The final interesting difference between the pseudocode and most imperative
languages is the variety of top-level items. The pseudocode has three forms of
items: procedures, functions and array-like functions. Procedures and functions
behave like procedures and functions of other imperative languages. The
arguments to them are passed by value, and the only difference between the two
is that procedures do not return values whilst functions do
[cite:@arm/DDI0553B.s §E1.4.2].

Array-like functions act as getters and setters for machine state. Every
array-like function has a reader form, and most have a writer form. This
distinction exists because \ldquo{}reading from and writing to an array element
require different functions\rdquo{} [cite:@arm/DDI0553B.s §E1.4.2], likely
because some machine registers are read-only rather than read-write. The writer
form acts as one of the targets of assignment expressions, along with variables
and the results of bitstring concatenation and slicing
[cite:@arm/DDI0553B.s §E1.3.5].

(FIXME: examples)

** Hoare Logic
Hoare logic is a proof system for programs written in imperative programming
languages. At its core, the logic describes how to build partial correctness
triples, which describe how program statements affect assertions about machine
state. The bulk of a Hoare logic derivation is dependent only on the syntax of
the program the proof targets.

A partial correctness triple is a relation between a precondition \(P\), a
program statement \(s\) and a postcondition \(Q\). If \(\{P\} s \{Q\}\) is a
partial correctness triple, then whenever \(P\) holds of the machine state
before executing \(s\), \(Q\) holds of the state after \(s\) terminates
[cite:@10.1145/363235.363259]. This is a /partial/ correctness triple because
the postcondition is only guaranteed if \(s\) terminates. When termination is
also guaranteed, the relation is called a (total) correctness triple.

#+name: WHILE-Hoare-logic
#+caption: Hoare logic rules for the WHILE language: skip, sequencing,
#+caption: assignment, if statements and while loops, together with the rule of
#+caption: consequence (the adaptation rule).
#+begin_figure
\begin{center}
\begin{prooftree}
  \infer0[SKIP]{\{P\}\;\texttt{skip}\;\{P\}}
  \infer[rule style=no rule,rule margin=3ex]1{\{P\}\;\texttt{s₁}\;\{Q\}\qquad\{Q\}\;\texttt{s₂}\;\{R\}}
  \infer1[SEQ]{\{P\}\;\texttt{s₁;s₂}\;\{R\}}
  \infer0[ASSIGN]{\{P[\texttt{x}/\texttt{v}]\}\;\texttt{x:=v}\;\{P\}}
  \infer[rule style=no rule,rule margin=3ex]1{\{P \wedge \texttt{e}\}\;\texttt{s₁}\;\{Q\}\qquad\{P \wedge \neg \texttt{e}\}\;\texttt{s₂}\;\{Q\}}
  \infer1[IF]{\{P\}\;\texttt{if e then s₁ else s₂}\;\{Q\}}
  \infer[rule style=no rule,rule margin=3ex]2{\{P \wedge \texttt{e}\}\;\texttt{s}\;\{P\}}
  \infer1[WHILE]{\{P\}\;\texttt{while e do s}\;\{P \wedge \neg \texttt{e}\}}
  \infer[rule style=no rule,rule margin=3ex]1{\models P_1 \rightarrow P_2\qquad\{P_2\}\;\texttt{s}\;\{Q_2\}\qquad\models Q_2 \rightarrow Q_1}
  \infer1[CSQ]{\{P_1\}\;\texttt{s}\;\{Q_1\}}
\end{prooftree}
\end{center}
#+end_figure

[[WHILE-Hoare-logic]] shows the rules Hoare introduced for the WHILE language
[cite:@10.1145/363235.363259]. The SKIP and SEQ rules are straightforward: the
skip statement has no effect on state, and sequencing statements composes their
effects. The IF rule is also uncomplicated. No matter which branch we take, the
postcondition remains the same; an if statement does no computation after
executing a branch.  Which branch we take depends on the value of ~e~. Because
the value of ~e~ is known before executing a branch, it is added to the
preconditions in the premises.

The ASSIGN rule appears backwards on first reading: the substitution is
performed in the precondition, before the assignment occurs! On closer
inspection, the reason for this reversal becomes clear. After the assignment,
~x~ holds the value that ~v~ had beforehand, and the original value of ~x~ is
lost. Hence the postcondition can only use ~x~ exactly where the precondition
used ~v~, which is what the substitution enforces.

The final structural Hoare logic rule for the WHILE language is the WHILE rule.
This rule can be derived by observing the fixed-point nature of a while
statement. As ~while e do s~ is equivalent to ~if e then (s ; while e do s) else
skip~, we can use the IF, SEQ and SKIP rules to solve the recursion equation for
the precondition and postcondition of the while statement.

The final Hoare logic rule is the rule of consequence, CSQ. This rule does not
recurse on the structure of the statement ~s~, but instead adapts the
precondition and postcondition: the precondition may be strengthened and the
postcondition weakened using logical implication.

[cite/t:@10.1145/363235.363259] does not specify the logic used to evaluate the
implications in the rule of consequence. Common choices are first-order logic
and higher-order logic
[cite:@10.1007/s00165-019-00501-3;@10.1007/s001650050057]. For specifying
program behaviour, one vital aspect of the choice of logic is the presence of
auxiliary variables [cite:@10.1007/s001650050057]. Auxiliary variables are a set
of variables that cannot be used within a program, but can be quantified over
within assertions or left as free variables. A free auxiliary variable remains
constant between the precondition and postcondition, and is universally
quantified within proofs.

 (FIXME: examples)
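
As a small worked example (using the rules of [[WHILE-Hoare-logic]] rather than
anything taken from the Arm pseudocode), the following derivation shows that
incrementing ~x~ increments its value. The auxiliary variable \(n\) records the
initial value of ~x~; only the rule of consequence requires reasoning in the
underlying assertion logic, while the rest of the derivation is dictated by the
syntax of the statement.

#+begin_export latex
\begin{center}
\begin{prooftree}
  \hypo{\models x = n \rightarrow x + 1 = n + 1}
  \infer0[ASSIGN]{\{x + 1 = n + 1\}\;\texttt{x := x + 1}\;\{x = n + 1\}}
  \hypo{\models x = n + 1 \rightarrow x = n + 1}
  \infer3[CSQ]{\{x = n\}\;\texttt{x := x + 1}\;\{x = n + 1\}}
\end{prooftree}
\end{center}
#+end_export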

** Agda
Agda is a dependently-typed proof assistant and functional programming language
based on Martin-Löf's type theory.  The work of
[cite/t:@10.1007/978-3-642-03359-9_6] provides an excellent introduction to the
language. This section summarises the features most important for the
implementation of AMPSL.

*Inductive families*. Data types like those found in ML or Haskell can be
indexed not only by types but also by specific values. This is best illustrated
by an example. Take, for instance, fixed-length vectors. They can be defined by
the following snippet:

#+begin_src agda2
data Vec (A : Set) : ℕ → Set where
  []  : Vec A 0
  _∷_ : ∀ {n} → A → Vec A n → Vec A (suc n)
#+end_src

First consider the type of ~Vec~. It is a function that accepts a type ~A~ and a
natural number, and returns a type. The position of ~A~ to the left of the colon
is significant; it is a /parameter/ of ~Vec~ instead of an /index/. Parameters
are required to be the same for all constructors, whilst indices can vary
between constructors [cite:@agda.readthedocs.io p.
\texttt{language/data-types.html}]. This means the following definition of ~Vec~
is invalid:

#+begin_src agda2
data Vec (A : Set) (n : ℕ) : Set where
  [] : Vec A 0
  -- 0 ≢ n  -^
  _∷_ : ∀ {n} → A → Vec A n → Vec A (suc n)
  -- and suc n ≢ n -----------------^
#+end_src

Whilst the value of parameters is constant in the return values of constructors,
they can vary across the arguments of constructors, even for the same type. One
example of this is the ~Assertion~ type given in (FIXME: forwardref) later in
the report. The ~all~ and ~some~ constructors both accept an ~Assertion Σ Γ (t ∷
Δ)~, but because they return an ~Assertion Σ Γ Δ~ the definition is valid.
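
Indexing by values also pays off when pattern matching. The following
(standard-library-style) ~lookup~ function is a typical illustration: because
the vector has type ~Vec A n~ and the index has type ~Fin n~ (the type of
naturals strictly less than ~n~, from ~Data.Fin~), the case for the empty
vector is impossible, and Agda accepts its omission.

#+begin_src agda2
lookup : ∀ {A : Set} {n} → Vec A n → Fin n → A
lookup (x ∷ _)  zero    = x
lookup (_ ∷ xs) (suc i) = lookup xs i
#+end_src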

*Parameterised modules and records*. Agda modules can accept parameters, which
can be used anywhere in the module. This works well with Agda's record types,
are a generalisation of a dependent product. (In fact, the builtin Σ type is
defined using a record [cite:@agda.readthedocs.io p.
\texttt{language/built-ins.html}].) The following snippet shows how records can
be used to define a setoid-enriched monoid:

#+begin_src agda2
record Monoid ℓ₁ ℓ₂ : Set (ℓsuc (ℓ₁ ⊔ ℓ₂)) where
  infixl 5 _∙_
  infix 4 _≈_
  field
    Carrier : Set ℓ₁
    _≈_     : Rel Carrier ℓ₂
    _∙_     : Op₂ Carrier
    ε       : Carrier
    refl    : ∀ {x} → x ≈ x
    sym     : ∀ {x y} → x ≈ y → y ≈ x
    trans   : ∀ {x y z} → x ≈ y → y ≈ z → x ≈ z
    ∙-cong  : ∀ {x y u v} → x ≈ y → u ≈ v → x ∙ y ≈ u ∙ v
    ∙-assoc : ∀ {x y z} → (x ∙ y) ∙ z ≈ x ∙ (y ∙ z)
    ∙-idˡ   : ∀ {x} → ε ∙ x ≈ x
    ∙-idʳ   : ∀ {x} → x ∙ ε ≈ x
#+end_src

This record bundles together an underlying ~Carrier~ type with an equality
relation ~_≈_~, a binary operator ~_∙_~ and an identity element ~ε~. It also
contains all the proofs necessary to show that ~_≈_~ is an equivalence relation
and that ~_∙_~ and ~ε~ form a monoid over it.

When a module is parameterised by a ~Monoid~, the module has access to an
abstract monoid. It can use the structure and laws given in the record freely,
but it cannot use additional laws (e.g. commutativity) without an additional
argument. This is useful when the operations and properties of a type are
well-defined, but a good representation is unknown.
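
As a small sketch of a parameterised module (illustrative, not part of AMPSL),
the following defines iterated application of the monoid operation. It may use
~ε~ and ~_∙_~ freely without knowing the carrier, but it could not, say,
reorder the operands without being passed a commutativity proof as well.

#+begin_src agda2
open import Data.Nat using (ℕ; zero; suc)

module Power {ℓ₁ ℓ₂} (M : Monoid ℓ₁ ℓ₂) where
  open Monoid M

  -- x ^ n combines n copies of x with _∙_, yielding ε when n is zero.
  _^_ : Carrier → ℕ → Carrier
  x ^ zero  = ε
  x ^ suc n = x ∙ (x ^ n)
#+end_src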

*Instance arguments*. Instance arguments are analogous to the type class
constraints found in Haskell [cite:@agda.readthedocs.io p.
\texttt{language/instance-arguments.html}]. They are a special form of implicit
argument, solved via /instance resolution/ rather than unification. Instance
arguments are a good solution for cases where Agda tries \ldquo{}too
hard\rdquo{} to find a solution for an implicit argument, and would otherwise
need that argument to be given explicitly. Using an instance argument instead
can force a particular solution onto Agda without the argument having to be
supplied explicitly at each use site.
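
The following self-contained sketch (unrelated to AMPSL) shows the mechanism.
The hypothetical ~describe~ function demands a ~Show~ instance for its argument
type, and instance resolution finds ~ShowBool~ automatically, much as a Haskell
type class constraint is discharged by an instance declaration.

#+begin_src agda2
open import Data.Bool   using (Bool; true; false)
open import Data.String using (String; _++_)

record Show (A : Set) : Set where
  field show : A → String

showBool : Bool → String
showBool true  = "true"
showBool false = "false"

instance
  ShowBool : Show Bool
  ShowBool = record { show = showBool }

-- Bring `show` into scope with its record argument as an instance argument.
open Show ⦃ ... ⦄

describe : ∀ {A : Set} ⦃ _ : Show A ⦄ → A → String
describe x = "value: " ++ show x
#+end_src

A call like ~describe true~ then elaborates to ~describe ⦃ ShowBool ⦄ true~
without the instance ever being named at the call site.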

* Related Work

# This chapter covers relevant (and typically, recent) research
# which you build upon (or improve upon). There are two complementary
# goals for this chapter:
# \begin{enumerate}
#   \item to show that you know and understand the state of the art; and
#   \item to put your work in context
# \end{enumerate}

# Ideally you can tackle both together by providing a critique of
# related work, and describing what is insufficient (and how you do
# better!)

# The related work chapter should usually come either near the front or
# near the back of the dissertation. The advantage of the former is that
# you get to build the argument for why your work is important before
# presenting your solution(s) in later chapters; the advantage of the
# latter is that don't have to forward reference to your solution too
# much. The correct choice will depend on what you're writing up, and
# your own personal preference.

There exist a multitude of formal verification tools designed either to
describe the semantics of ISA instructions or to prove the correctness of
algorithms. This section describes some of the most significant work in the
field and how the design of AMPSL improves upon it.

** Sail

Sail [cite:@10.1145/3290384] is a language for describing the instruction-set
architecture semantics of processors. It has a syntax similar to the pseudocode
specification of architectures and a first-order type system with dependent
bitvector and numeric types. It is officially used by
[cite/t:@riscv/spec-20191213] to specify the concurrent memory semantics of the
RISC-V architecture.

Sail has many different backends available, including sequential emulators,
concurrency models and theorem-prover definitions. Further, there are tools to
automatically translate documents from the Arm Specification Language into Sail
[cite:@10.1145/3290384].

Despite the many advantages of Sail over other solutions, it is not suitable
for this project for a number of reasons. The first is the poor or nonexistent
documentation of the Sail theorem-proving backends: deciphering the output of
these tools would consume too much of the time available for the project.

Another reason to avoid Sail is the unnecessary complexity of its model of the
ISA semantics. Sail attempts to capture the full complexity of the semantics,
particularly in the face of concurrent memory accesses. This complexity is
unnecessary for the Arm M-profile architecture, which has a single thread of
execution and therefore much simpler semantics to reason about.

** ?
(FIXME: add more related work by following citations.)

* Design of AMPSL and its Semantics
In this chapter I introduce AMPSL, a language modelled on the Arm pseudocode.
AMPSL is defined within Agda, and makes judicious use of Agda's
dependent-typing features to eliminate assertions and ensure programs cannot
fail.

To construct proofs about how AMPSL programs behave, it is necessary to
describe the language's semantics. This is done by providing a denotational
semantics, which interprets program expressions and statements as mathematical
functions, something Agda is well-suited to express.

One downside of a denotational semantics is that the control flow of looping
constructs is fully evaluated, which is inefficient for loops that undergo many
iterations. This is resolved by a syntax-directed Hoare logic for AMPSL. Hoare
logic derivations assign a precondition and a postcondition assertion to each
statement, and these are chained together through a number of simple logical
implications.

** AMPSL Syntax
AMPSL is a language, written entirely in Agda, that closely resembles the
Armv8-M pseudocode specification language. Unfortunately, the pseudocode has a
number of small features that make it difficult to work with in Agda directly.
AMPSL therefore makes a number of small changes to the pseudocode to better
facilitate this embedding, typically by generalising existing features of the
pseudocode.

*** AMPSL Types

#+name: AMPSL-types
#+caption: The Agda datatype representing the types present in AMPSL. Most have
#+caption: a direct analogue in the Armv8-M pseudocode specification language
#+attr_latex: :float t
#+begin_src agda2
data Type : Set where
  bool  : Type
  int   : Type
  fin   : (n : ℕ) → Type
  real  : Type
  tuple : Vec Type n → Type
  array : Type → (n : ℕ) → Type
#+end_src

[[AMPSL-types]] gives the Agda datatype representing the types of AMPSL. Most of
these have a direct analogue among the pseudocode types: ~bool~ is for
Booleans, ~int~ for mathematical integers, ~real~ for mathematical real numbers
and ~array~ constructs array types. Instead of an enumeration construct, AMPSL
uses the ~fin n~ type, representing a finite set of ~n~ elements. Similarly,
structs are represented by ~tuple~ types.

The most significant difference between the pseudocode and AMPSL is the
representation of bitstrings. Whilst the pseudocode has the ~bits~ datatype,
AMPSL instead treats bitstrings as arrays of Booleans.  This removes the
distinction between arrays and bitstrings, and allows a number of operations to
be generalised to work on both. This makes AMPSL more expressive than the
pseudocode, in the sense that there are more, and more concise, ways to write
functionally equivalent expressions.

The pseudocode implicitly specifies three different properties of types: equality
comparisons, order comparisons and arithmetic operations. Whilst the types
satisfying these properties need to be listed explicitly in Agda, using instance
arguments allows for these proofs to be elided whenever they are required.

AMPSL has only two differences from the pseudocode in the types that satisfy
these properties. First, all array types have equality as long as the element
type also has equality. This is a natural generalisation of equality between
types, and allows the AMPSL formulation of bitstrings as arrays of Booleans to
have equality. Secondly, finite sets also have an ordering. This change is
primarily a convenience for comparing finite sets that represent a subset of
the integers. As the pseudocode has no ordering comparisons between
enumerations, this causes no problems when converting pseudocode programs into
AMPSL.

The final interesting feature of the types in AMPSL is implicit coercion for
arithmetic. As pseudocode arithmetic is polymorphic over integers and reals,
AMPSL needs a function to decide the type of the result. By describing the
output type as a function of the input types, the same constructor can be used
for all combinations of numeric inputs.
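
For illustration, such a function might look as follows. This is a hypothetical
sketch rather than AMPSL's actual definition: two integer operands give an
integer result, and any other combination of numeric operands gives a real
result.

#+begin_src agda2
-- Hypothetical sketch: the result type of a polymorphic arithmetic
-- operator, computed from the types of its operands.
combine : Type → Type → Type
combine int int = int
combine _   _   = real
#+end_src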

*** AMPSL Expressions

#+name: AMPSL-literalType
#+caption: Mappings from AMPSL types into Agda types which can be used as
#+caption: literal values. ~literalTypes~ is a function that returns a product
#+caption: of the types given in the argument.
#+begin_src agda
literalType : Type → Set
literalType bool        = Bool
literalType int         = ℤ
literalType (fin n)     = Fin n
literalType real        = ℤ
literalType (tuple ts)  = literalTypes ts
literalType (array t n) = Vec (literalType t) n
#+end_src

Unlike the pseudocode, where only a few types have literal expressions, every
type in AMPSL has a literal form. This mapping is part of the ~literalType~
function, given in [[AMPSL-literalType]]. Most AMPSL literals accept the
corresponding Agda type as a value. For instance, ~bool~ literals are Agda
Booleans, and ~array~ literals are fixed-length Agda vectors of the
corresponding underlying type. The only exception to this rule is for ~real~
values. As Agda does not have a type representing mathematical reals, integers
are used instead. This is sufficient as any real value occurring in the
pseudocode in [cite:@arm/DDI0553B.s] is rational.

# TODO: why is this sufficient?

#+name: AMPSL-expr-prototypes
#+caption: Prototypes of the numerous AMPSL program elements. Each one takes two
#+caption: variable contexts: ~Σ~ for global variables and ~Γ~ for local variables.
#+attr_latex: :float t
#+begin_src agda
data Expression     (Σ : Vec Type o) (Γ : Vec Type n) : Type → Set
data Reference      (Σ : Vec Type o) (Γ : Vec Type n) : Type → Set
data LocalReference (Σ : Vec Type o) (Γ : Vec Type n) : Type → Set
data Statement      (Σ : Vec Type o) (Γ : Vec Type n) : Set
data LocalStatement (Σ : Vec Type o) (Γ : Vec Type n) : Set
data Function       (Σ : Vec Type o) (Γ : Vec Type n) (ret : Type) : Set
data Procedure      (Σ : Vec Type o) (Γ : Vec Type n) : Set
#+end_src

[[AMPSL-expr-prototypes]] lists the prototypes for the various AMPSL program
elements, with the full definitions given in [[*AMPSL Syntax Definition]].
Each of the AMPSL program element types is parameterised by two variable
contexts: Σ for global variables and Γ for local variables. The two contexts
are kept separate to simplify the types of function calls and procedure
invocations. As the set of global variables does not change across a program,
functions and procedures keep the same value of the parameter Σ in their types.
As functions and procedures have different local variables from the calling
context, having the local variable context as a separate parameter makes this
change simple.

An ~Expression~ in AMPSL corresponds with expressions in the pseudocode. Many
operators are identical to those in the pseudocode (like ~+~, ~*~, ~-~), and
others are simple renamings (like ~≟~ instead of ~==~ for equality comparisons).
Unlike the pseudocode, where literals can appear unqualified, AMPSL literals
are introduced by the ~lit~ constructor.
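
For instance, assuming ~lit~ wraps a value of the corresponding ~literalType~
into an ~Expression~ (consistent with its uses in the examples later in this
chapter), the following are literal expressions for an integer and for a
two-element Boolean array; the names ~two~ and ~mask₂~ are chosen only for this
illustration.

#+begin_src agda2
two : Expression Σ Γ int
two = lit (ℤ.+ 2)

mask₂ : Expression Σ Γ (array bool 2)
mask₂ = lit (true ∷ false ∷ [])
#+end_src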

The most immediate change for programming in AMPSL versus the pseudocode is how
variables are handled. Because the ~Expression~ type carries fixed-length
vectors listing the AMPSL types of variables, a variable is referred to by its
index into the context. For example, a variable context \(\{x \mapsto
\mathrm{int}, y \mapsto \mathrm{real}\}\) is represented in AMPSL as the context
~int ∷ real ∷ []~. The variable \(x\) is then represented by ~var 0F~ in AMPSL.
Because the global and local variable contexts are disjoint for the ~Expression~
type, variables are constructed using ~state~ or ~var~ respectively.

Whilst this decision adds complexity when writing AMPSL programs, it greatly
simplifies the language for the purpose of constructing proofs. This technique,
essentially de Bruijn indexing, is also used in the internal representations of
many compilers (FIXME: cite).

AMPSL expressions also add a number of useful constructs to the pseudocode type.
One such pair is ~[_]~ and ~unbox~, which construct and destruct an array of
length one respectively. Others are ~fin~, which allows for arbitrary
computations on elements of finite sets, and ~asInt~, which converts a finite
value into an integer.

The final three AMPSL operators of note are ~merge~, ~slice~ and ~cut~. These
all operate on arrays, either merging two together, taking out a slice, or
cutting out a slice. Unlike the pseudocode, where bitstring slicing requires a
range, these three operators use Agda's dependent types and type inference so
that only a base offset is necessary.

~slice xs i~, like bitstring slicing, extracts a contiguous subset of values
from an array ~xs~, such that the first element of ~slice xs i~ is the element
of ~xs~ at position ~i~. ~cut xs i~ returns the remainder of ~slice xs i~: the
two ends of ~xs~ not in the slice, concatenated. Finally, ~merge xs ys i~ joins
~xs~ and ~ys~, inverting ~cut~ and ~slice~, so that the three operators form a
product-projection triple.

The ~Reference~ type is the name AMPSL gives to assignable expressions from the
pseudocode. The ~LocalReference~ type is identical to ~Reference~, except that
it does not include global variables. Due to the complications that multiple
assignments to one location would add to the semantics,
\ldquo{}product\rdquo{} operations like ~merge~ and ~cons~ are excluded from
references, despite concatenated bitstrings and tuples being assignable
expressions in the pseudocode. Whilst [cite:@arm/DDI0553B.s §E1.3.3] requires
that no position in a bitstring is referenced twice, enforcing this in AMPSL
for ~merge~ and ~cons~ would make them unergonomic to use in practice, whether
writing code or proofs.

(FIXME: necessary?) In an earlier form of AMPSL, instead of separate types for
assignable expressions that can and cannot assign to global state, there were
two predicates. However, this required carrying a proof that the predicate
holds with each assignment. Whilst the impact on performance was not measured,
it made proving statements about assignable expressions significantly more
difficult. Thankfully, Agda is able to resolve overloaded data type constructors
without much difficulty, meaning the use of ~Reference~ and ~LocalReference~ in
AMPSL programs is transparent.

**** Example AMPSL Expressions
One arithmetic operator used in the pseudocode is the left shift.
[cite/t:@arm/DDI0553B.s §E1.3.4] explains how it can be encoded using other
arithmetic operators; the AMPSL version of this encoding is shown below:

#+begin_src agda2
_<<_ : Expression Σ Γ int → (n : ℕ) → Expression Σ Γ int
e << n = e * lit (ℤ.+ (2 ℕ.^ n))
#+end_src

This simple-looking definition has a lot of hidden complexity. First, consider
the type of the literal. The unary plus operation tells us that the literal
value is an Agda integer. However, there are two AMPSL types whose literal
values are Agda integers: ~int~ and ~real~. How does Agda correctly infer the
type? Recall that multiplication is polymorphic in AMPSL, with the result type
determined by implicit coercion. Agda knows that the multiplication must return
an ~int~, and that the first argument is also an ~int~, so it can infer that
the second multiplicand is an ~int~ literal.

Another pseudocode operation that AMPSL does not provide directly is integer
slicing. Here is an expression that slices a single bit from an integer,
following the procedure described by [cite/t:@arm/DDI0553B.s §E1.3.3]:

#+begin_src agda2
getBit : ℕ → Expression Σ Γ int → Expression Σ Γ bit
getBit i x =
  inv (x - ((x >> suc i) << suc i) <? lit (ℤ.+ (2 ℕ.^ i)))
#+end_src

This makes use of AMPSL unifying the ~bit~ and ~bool~ types. The left-hand side
of the inequality finds the residue of ~x~ modulo \(2^{i+1}\).  Note that
right-shift is defined to always round downwards, hence this residue is always
non-negative. If the residue is less than \(2^i\), then bit ~i~ in the two's
complement representation of ~x~ is ~0~; otherwise it is ~1~.

*** AMPSL Statements
Most of the statements that are present in AMPSL are unsurprising. The ~skip~
and sequencing (~_∙_~) statements should be familiar from the discussion on
Hoare logic, the assignment statement (~_≔_~) assigns a value into a reference,
the ~invoke~ statement calls a procedure and the ~if_then_else_~ statement
starts a conditional block.

Given that AMPSL has a ~skip~ statement and an ~if_then_else_~ control-flow
structure, including the ~if_then_~ statement may appear redundant, and indeed
it is. It is nevertheless included in AMPSL for two reasons. The first is
ergonomics: ~if_then_~ statements appear far more often in the pseudocode than
~if_then_else_~ statements, so omitting it would only serve to complicate the
code. The other reason is that including an ~if_then_~ statement makes the
behaviour of a number of functions that manipulate AMPSL code much easier to
reason about.

The form of variable declarations is significantly different in AMPSL than it is
in the pseudocode. As variables in AMPSL are accessed by index into the variable
context instead of by name, AMPSL variable declarations do not need a name. In
addition, Agda can often infer the type of a declared variable from the context
in which it is used, making type annotations unnecessary. The last and most
significant difference is that all variables in AMPSL must be initialised. This
simplifies the semantics of AMPSL greatly, and prevents the use of uninitialised
variables.

AMPSL makes a small modification to ~for~ loops that greatly improves type
safety over what is achieved by the pseudocode. Instead of looping over a range
of dynamic values [cite:@arm/DDI0553B.s §E1.4.4], AMPSL loops perform a static
number of iterations, determined by an Agda natural ~n~. Then, instead of the
loop variable being an assignable integer expression, AMPSL introduces a new
variable of type ~fin n~.

There are three statement forms from the pseudocode that AMPSL omits. These are
~while...do~ loops, ~repeat...until~ loops and ~try...catch~ exception handling.
Including these three statements would greatly complicate the denotational
encoding of AMPSL, by removing termination guarantees and requiring a monadic
transformation for the loops and exceptions, respectively.

Thankfully, these three structures are not a vital part of the pseudocode, each
either having a functional alternative [cite:@arm/DDI0553B.s §E2.1.166] or
forming part of internal processor bookkeeping [cite:@arm/DDI0553B.s §E2.1.397],
[cite:@arm/DDI0553B.s §E2.1.366]. Hence their omission from AMPSL is not a
significant loss.

AMPSL has a ~LocalStatement~ type as well as a ~Statement~ type. Whilst
~Statement~ can assign values into any ~Reference~, a ~LocalStatement~ can only
assign values into a ~LocalReference~. This means that ~LocalStatement~ cannot
modify global state, only local state.

**** Example AMPSL Statements
Here is a statement that copies elements from ~y~ into ~x~ if the corresponding
entry in ~mask~ is true:

#+begin_src agda2
copyMasked : Statement Σ (array t n ∷ array t n ∷ array bool n ∷ [])
copyMasked =
  for n (
    let i = var 0F in
    let x = var 1F in
    let y = var 2F in
    let mask = var 3F in

    if index mask i ≟ true
    then
        *index x i ≔ index y i
  )
#+end_src

This uses the Agda functions ~index~ and ~*index~ to apply the appropriate
slices, casts and unboxing to extract an element from an array expression and
reference, respectively. One thing of note is the use of ~let...in~ to give
variables meaningful names, a stylistic choice that works well in this case.
Unfortunately, whenever a new variable is declared, these named aliases become
useless, as the variable context (and hence each variable's index) changes. For
example, consider the following snippet:

#+begin_src agda2
VPTAdvance : Statement State (beat ∷ [])
VPTAdvance =
  declare (fin div2 (tup (var 0F ∷ []))) (
  declare (elem 4 (! VPR-mask) (var 0F)) (
    let vptState = var 0F in
    let maskId = var 1F in
    let beat = var 2F in

    if ! vptState ≟ lit (true ∷ false ∷ false ∷ false ∷ [])
    then
      vptState ≔ lit (Vec.replicate false)
    else if inv (! vptState ≟ lit (Vec.replicate false))
    then (
      declare (lit false) (
        let i = var 0F in
        let vptState = var 1F in
        -- let mask = var 2F in
        let beat = var 3F in

        cons vptState (cons i nil) ≔ call (LSL-C 0) (! vptState ∷ []) ∙
        if ! i
        then
          *elem 4 VPR-P0 beat ≔ not (elem 4 (! VPR-P0) beat))) ∙
    if getBit 0 (asInt beat)
    then
      *elem 4 VPR-mask maskId ≔ ! vptState))
#+end_src

This corresponds to the ~VPTAdvance~ procedure of [cite/t:@arm/DDI0553B.s
§E2.1.424] (FIXME: why?). Notice how, every time a new variable is introduced,
the variable names have to be restated. Whilst this is a barrier when writing
programs in AMPSL, the type-safety guarantees and simplified proofs more than
make up for the inconvenience compared with named variables.

*** AMPSL Functions and Procedures
Much as a procedure in the pseudocode is a wrapper around a block of
statements, a ~Procedure~ in AMPSL is a wrapper around a ~Statement~. Note that
AMPSL procedures have only one exit point, the end of the statement, unlike the
pseudocode, which has ~return~ statements. Any procedure using a ~return~
statement can be transformed into one that does not by a simple refactoring, so
AMPSL loses no expressive power over the pseudocode.

AMPSL functions are more complex than procedures. A function consists of a pair
of an ~Expression~ and ~LocalStatement~. The statement has the function
arguments and the return value as local variables, where the return value is
initialised to the result of the expression. The return value of the function is
then the final value of the return variable.

**** Example AMPSL Functions and Procedures
As ~Procedure~ is almost an alias for ~Statement~, examples of procedures can
be found in [[*Example AMPSL Statements]]. The following is a simple function
that converts a bitstring to an unsigned or signed integer, depending on
whether its second argument is true or false:

#+begin_src agda2
Int : Function State (bits n ∷ bool ∷ []) int
Int =
  init
    if var 1F
    then uint (var 0F)
    else sint (var 0F) ∙
    skip
  end
#+end_src

The function body is the ~skip~ statement, meaning that whatever is initially
assigned to the return variable is the result of calling the function. The
initial value of the return variable is a simple conditional statement, calling
~uint~ or ~sint~ on the first argument as appropriate. Many functions that are
easy to inline have this form.

(FIXME: make uint an example)
# The ~GetCurInstBeat~ function by [cite/t:@arm/DDI0553B.s §E2.1.185] is one
# function that benefits from the unusual representation of functions. A
# simplified AMPSL version is given below.

# #+begin_src agda2
# GetCurInstrBeat : Function State [] (tuple (beat ∷ elmtMask ∷ []))
# GetCurInstrBeat =
#   init
#     tup (! BeatId ∷ lit (Vec.replicate true) ∷ []) ∙ (
#       let outA = head (var 0F) in
#       let outB = head (tail (var 0F)) in
#       if call VPTActive (! BeatId ∷ [])
#       then
#         outB ≔ !! outB and elem 4 (! VPR-P0) outA
#     )
#   end
# #+end_src

# The function initialises a default return value, and then modifies it based on
# the current state of execution. This is easy to encode in the AMPSL function
# syntax. The return variable is initialised to the default value, and the
# function body performs the necessary manipulations.

In this way a function is much like a ~declare~ statement. However, instead of
discarding the declared variable when it leaves scope, a function returns it to
the caller.

** AMPSL Semantics
So far we have discussed the syntactic form of AMPSL, showing how it is similar
to the Arm pseudocode. We have also given a brief high-level semantics of AMPSL.
Formal verification requires a much more detailed description of the semantics
than what has been given so far.

This section starts with a brief discussion of how to model AMPSL types. This
addresses the burning question of how to model real numbers in Agda.  From this,
we discuss the denotational semantics of AMPSL, and how AMPSL program elements
can be converted into a number of different Agda function types. The section
ends with a presentation of a Hoare logic for AMPSL, allowing for efficient
syntax-directed proofs of statements.

*** AMPSL Datatype Models
#+name: AMPSL-type-models
#+caption: The semantic encoding of AMPSL data types. The use of ~Lift~ is to
#+caption: ensure all the encodings occupy the same Agda universe level.
#+begin_src agda2
⟦_⟧ₜ  : Type → Set ℓ
⟦_⟧ₜₛ : Vec Type n → Set ℓ

⟦ bool ⟧ₜ      = Lift ℓ Bool
⟦ int ⟧ₜ       = Lift ℓ ℤ
⟦ fin n ⟧ₜ     = Lift ℓ (Fin n)
⟦ real ⟧ₜ      = Lift ℓ ℝ
⟦ tuple ts ⟧ₜ  = ⟦ ts ⟧ₜₛ
⟦ array t n ⟧ₜ = Vec ⟦ t ⟧ₜ n

⟦ [] ⟧ₜₛ          = Lift ℓ ⊤
⟦ t ∷ [] ⟧ₜₛ      = ⟦ t ⟧ₜ
⟦ t ∷ t₁ ∷ ts ⟧ₜₛ = ⟦ t ⟧ₜ × ⟦ t₁ ∷ ts ⟧ₜₛ
#+end_src

To be able to write a denotational semantics for a language, the first step is
to find a suitable encoding for the data types. In this case, we have to be able
to find encodings of AMPSL types within Agda. [[AMPSL-type-models]] shows the full
encoding function. Most of the choices are fairly trivial: Agda Booleans for
~bool~, Agda vectors for ~array t n~ and the Agda finite set type ~Fin n~ for
the AMPSL type ~fin n~.

Tuples are the next simplest type, being encoded as an n-ary product. This is
the action of the ~⟦_⟧ₜₛ~ function in [[AMPSL-type-models]]. Unfortunately, the Agda
standard library does not have a dependent n-ary product type, and even if it
did, the Agda termination checker would not accept its use here, hence the
manual inductive definition.
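
For readers more familiar with Haskell than Agda, the same inductive trick can
be sketched with a closed type family. This is a hypothetical analogue, not part
of the formalisation:

#+begin_src haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}
import Data.Kind (Type)

-- A list of types is encoded as right-nested pairs; the singleton case
-- avoids a redundant unit component, mirroring ⟦_⟧ₜₛ above.
type family Tuple (ts :: [Type]) :: Type where
  Tuple '[]             = ()
  Tuple (t ': '[])      = t
  Tuple (t ': t1 ': ts) = (t, Tuple (t1 ': ts))
#+end_src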

(ALTERNATIVE 1: ~int~ stays as abstract discrete ordered commutative ring)

The other two AMPSL types are ~int~, ~real~. Whilst ~int~ could feasibly be
encoded by the Agda integer type, there is no useful Agda encoding for
mathematical real numbers. Because of this, both numeric types are represented
by abstract types with the appropriate properties. ~int~ is represented by a
discrete ordered commutative ring ℤ and ~real~ by a field ℝ. We also require
that there is a split ring monomorphism \(\mathtt{/1} : ℤ \to ℝ\) with
retraction \(\mathtt{⌊\_⌋} : ℝ \to ℤ\). \(\mathtt{⌊\_⌋}\) may not be a ring
homomorphism, but it must preserve \(\le\) ordering and satisfy the floor
property:

\[
\forall x\, y.\; x < y \,\mathtt{/1} \implies ⌊ x ⌋ < y
\]

(ALTERNATIVE 2: ~real~ becomes rational.)

The other two AMPSL types are ~int~ and ~real~. ~int~ is encoded by the Agda
integer type. However, there is no useful Agda encoding for mathematical real
numbers. Instead, ~real~ is approximated by the Agda rational type ℚ. Whilst
this clearly cannot encode all real numbers, it satisfies nearly all of the
properties required by the pseudocode real-number type. The only missing
operation is square root, which is unnecessary for the proofs AMPSL is designed
for.

(END ALTERNATIVES)

*** Denotational Semantics

#+name: AMPSL-denotational-prototypes
#+caption: Function prototypes for the denotational semantics of different AMPSL
#+caption: program elements. All of them become functions from the current
#+caption: variable context into some return value.
#+begin_src agda2
expr      : Expression Σ Γ t        → ⟦ Σ ⟧ₜₛ × ⟦ Γ ⟧ₜₛ → ⟦ t ⟧ₜ
exprs     : All (Expression Σ Γ) ts → ⟦ Σ ⟧ₜₛ × ⟦ Γ ⟧ₜₛ → ⟦ ts ⟧ₜₛ
ref       : Reference Σ Γ t         → ⟦ Σ ⟧ₜₛ × ⟦ Γ ⟧ₜₛ → ⟦ t ⟧ₜ
locRef    : LocalReference Σ Γ t    → ⟦ Σ ⟧ₜₛ × ⟦ Γ ⟧ₜₛ → ⟦ t ⟧ₜ
stmt      : Statement Σ Γ           → ⟦ Σ ⟧ₜₛ × ⟦ Γ ⟧ₜₛ → ⟦ Σ ⟧ₜₛ × ⟦ Γ ⟧ₜₛ
locStmt   : LocalStatement Σ Γ      → ⟦ Σ ⟧ₜₛ × ⟦ Γ ⟧ₜₛ → ⟦ Γ ⟧ₜₛ
fun       : Function Σ Γ t          → ⟦ Σ ⟧ₜₛ × ⟦ Γ ⟧ₜₛ → ⟦ t ⟧ₜ
proc      : Procedure Σ Γ           → ⟦ Σ ⟧ₜₛ × ⟦ Γ ⟧ₜₛ → ⟦ Σ ⟧ₜₛ
#+end_src

The denotational semantics has to represent the different AMPSL program elements
as mathematical objects. In this case, due to careful design of AMPSL's syntax,
each of the elements is represented by a total function.
[[AMPSL-denotational-prototypes]] shows the prototypes of the different semantic
interpretation functions, and the full definition is in [[*AMPSL Denotational
Semantics]]. Each function accepts the current variable context as an argument.
Because the variable contexts are an ordered sequence of values of different
types, they can be encoded in the same way as tuples.

**** Expression Semantics

The semantic representation of an expression converts the current variable
context into a value with the same type as the expression. Most cases are
simple. For example, addition is the sum of the values of the two
subexpressions, computed recursively. Global and local variables are slightly
more interesting, although each is only a lookup of the current value in the
variable context. This lookup is guaranteed to be safe because variables are
indexed by the context they appear in. Despite both being subsets of the
~Expression~ type, ~Reference~ and ~LocalReference~ require their own functions
to satisfy the demands of the termination checker.

One significant omission from this definition is any treatment of evaluation
order. Due to the design choices that AMPSL functions cannot modify global
state, and that no AMPSL expression can modify state, expressions have the same
value no matter the order in which sub-expressions are evaluated. This is also
reflected in the type of the denotational representation of expressions: it can
only return a value, never a modified version of the state.
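
The following toy sketch (my own illustration in Haskell, not the Agda
definitions) shows why this is the case: expressions denote pure functions of
the variable contexts, so there is no state to thread between sub-expressions.

#+begin_src haskell
-- Expressions over abstract global and local contexts g and l.
data Expr g l
  = Global (g -> Integer)      -- read a global variable
  | Local  (l -> Integer)      -- read a local variable
  | Lit Integer
  | Add (Expr g l) (Expr g l)

-- Evaluation never returns an updated context, so the order in which the
-- operands of Add are evaluated cannot be observed.
eval :: Expr g l -> (g, l) -> Integer
eval (Global f) (g, _) = f g
eval (Local f)  (_, l) = f l
eval (Lit n)    _      = n
eval (Add a b)  ctx    = eval a ctx + eval b ctx
#+end_src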

**** Assignment Semantics
#+name: AMPSL-denotational-assign-prototypes
#+caption: Function prototypes for the ~assign~ and ~locAssign~ helper
#+caption: functions. The arguments are the reference, new value, original
#+caption: variable context and the context to update. The original context is
#+caption: needed to evaluate expressions within the reference.
#+begin_src agda2
assign    : Reference Σ Γ t      → ⟦ t ⟧ₜ → ⟦ Σ ⟧ₜₛ × ⟦ Γ ⟧ₜₛ → ⟦ Σ ⟧ₜₛ × ⟦ Γ ⟧ₜₛ
locAssign : LocalReference Σ Γ t → ⟦ t ⟧ₜ → ⟦ Σ ⟧ₜₛ × ⟦ Γ ⟧ₜₛ → ⟦ Γ ⟧ₜₛ
#+end_src

Before considering statements as a whole, we start with assignment statements.
If assignments were only into variables, this would be a trivial update to the
relevant part of the context. However, the use of ~Reference~ makes things more
tricky. Broadly speaking, there are three types of ~Reference~: terminal
references like ~state~ and ~var~; isomorphism operations like ~unbox~, ~[_]~
and ~cast~; and projection operations like ~slice~, ~cut~, ~head~ and ~tail~.

We will consider how to update each of the three types of references in turn,
which is the action performed by helper functions ~assign~ and ~locAssign~, the
signatures of which are given in [[AMPSL-denotational-assign-prototypes]].

Terminal references are the base case and are easy to handle: assigning into
~state~ and ~var~ updates the relevant part of the variable context. Isomorphism
reference operations are also relatively simple to assign into: first transform
the value using the inverse operation, then assign the result into the
sub-reference. For example, the assignment ~[ ref ] ≔ v~ is the same as
~ref ≔ unbox v~.

The final type of reference to consider are the projection reference operations.
Assigning into one projection of a reference means that the other part remains
unchanged. Consider the assignment ~head r ≔ v~ as an example. This is
equivalent to ~r ≔ cons v (tail r)~, which makes it clear that the second
projection remains constant. The second projection must be computed using the
original variable context, which is achieved by only updating the context for a
leaf reference.
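
A minimal sketch of this idea in Haskell, assuming a toy getter/setter
representation of references rather than the thesis's ~Reference~ type:

#+begin_src haskell
-- A reference into a state s selecting a value of type a.
data Ref s a = Ref { get :: s -> a, set :: a -> s -> s }

-- Isomorphism reference: assigning applies the inverse operation first
-- (cf. `[ ref ] ≔ v` being `ref ≔ unbox v`).
iso :: (a -> b) -> (b -> a) -> Ref s a -> Ref s b
iso to from r = Ref (to . get r) (set r . from)

-- Projection reference: the untouched component is recomputed from the
-- original state (cf. `head r ≔ v` being `r ≔ cons v (tail r)`).
first :: Ref s (a, b) -> Ref s a
first r = Ref (fst . get r) (\v s -> set r (v, snd (get r s)) s)

-- Assignment then updates the state through the reference.
assign :: Ref s a -> a -> s -> s
assign = set
#+end_src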

This interpretation of slice as a projection reference type is a large part of
the reason why AMPSL has ~merge~, ~cut~ and ~slice~ instead of the bitstring
concatenation and slicing present in the pseudocode. There is no way to form a
product-projection triple with only bitstring joining and slicing, so any
denotational semantics with these operations would require merge and cut
operations on the encoding of values.  AMPSL takes these semantic necessities
and makes them available to programmers.
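
To illustrate the point, here is one way (a sketch under my own conventions, not
necessarily AMPSL's exact ~cut~ and ~merge~) to form such a product-projection
pair on plain lists:

#+begin_src haskell
-- cut splits out a slice of length n at offset i; merge reassembles it.
cut :: Int -> Int -> [a] -> ([a], [a])
cut n i xs = (take n (drop i xs), take i xs ++ drop (i + n) xs)

merge :: Int -> ([a], [a]) -> [a]
merge i (slice, rest) = take i rest ++ slice ++ drop i rest

-- merge i (cut n i xs) == xs whenever i + n <= length xs, which is what
-- allows slicing to be treated as a projection reference.
#+end_src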

~assign~ and ~locAssign~, when given a reference and initial context, return the
full and local variable contexts respectively. As ~Reference~ includes both
~state~ and ~var~, assigning into a reference can modify both global and local
references. In contrast, ~LocalReference~ only features ~var~, so can only
modify local variables.

**** Statement Semantics
Compared to assignment, the semantics of other statements are trivial to
compute. Skip statements map to the identity function and sequencing is function
composition, reflecting that they do nothing and compose statements together
respectively. As expressions cannot modify state, ~if_then_else_~ and ~if_then_~
statements become simple---evaluate the condition and both branches on the input
state, and return the branch depending on the value of the condition. Local
variable declarations are also quite simple. The initial value is computed and
added to the variable context. After evaluating the subsequent statement, the
final value of the new variable is stripped away from the context.

The only looping construct in AMPSL is the ~for~ loop. Because it performs a
fixed number of iterations, it too has easy-to-implement denotational semantics.
This is because it is effectively a fixed number of ~declare~ statements all
sequenced together. This is also one of the primary reasons why the denotational
semantics can have poor computational performance; every iteration of the ~for~
loop must be evaluated individually.
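
These cases can be rendered as a toy sketch in Haskell (over an abstract state
type, not the thesis's context-indexed definitions):

#+begin_src haskell
-- A statement denotes a total function on the state.
type Stmt s = s -> s

skipS :: Stmt s
skipS = id                                    -- skip is the identity

seqS :: Stmt s -> Stmt s -> Stmt s
seqS s1 s2 = s2 . s1                          -- sequencing is composition

ifS :: (s -> Bool) -> Stmt s -> Stmt s -> Stmt s
ifS c t e st = if c st then t st else e st    -- conditions cannot change state

-- declare: extend the state with the new variable, run the body, then
-- strip the final value of the variable away again.
declareS :: (s -> a) -> Stmt (a, s) -> Stmt s
declareS e body st = snd (body (e st, st))

-- for: a fixed number of iterations, each declaring the loop counter;
-- evaluating this semantics really does run every iteration.
forS :: Int -> Stmt (Int, s) -> Stmt s
forS n body st = foldl (\s i -> snd (body (i, s))) st [0 .. n - 1]
#+end_src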

~stmt~ and ~locStmt~ return the full context and only the local variables
respectively. This is because only ~Statement~ can include ~Reference~ which can
reference global state. On the other hand, ~LocalReference~ used by
~LocalStatement~ can only refer to, and hence modify, local state.

**** Function and Procedure Semantics
Finally there are ~proc~ and ~fun~ for denoting procedures and functions. ~proc~
returns the global state only. ~Procedure~ is a thin wrapper around ~Statement~,
which modifies both local and global state. However, the local state is lost
when leaving a procedure, hence ~proc~ only returns the global part.

~fun~ behaves a lot like a ~declare~ statement. It initialises the return
variable to the given expression, then evaluates the ~LocalStatement~ body.
Unlike ~declare~, which discards the added variable upon exiting the statement,
~fun~ instead returns the value of that variable. As ~LocalStatement~ cannot
modify global state, and the other local variables are lost upon exiting the
function, only this one return value is necessary.
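
Continuing the toy Haskell sketch from the statement semantics (again my own
simplification, not the Agda definition), the contrast with ~declare~ is just
which component of the final state is kept:

#+begin_src haskell
-- Like declareS, a function extends the state with a return variable
-- initialised from an expression, but it hands the final value of that
-- variable back to the caller instead of discarding it.
funS :: (s -> a) -> ((a, s) -> (a, s)) -> s -> a
funS e body st = fst (body (e st, st))
#+end_src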

*** Hoare Logic Semantics
The final form of semantics specified for AMPSL is a form of Hoare logic. Unlike
the denotational semantics, which must perform a full computation, the Hoare
logic is syntax-directed; loops only require a single proof. This section starts
by explaining how an AMPSL ~Expression~ is converted into a ~Term~ for use in
Hoare logic assertions. Then the syntax and semantics of the ~Assertion~ type is
discussed before finally giving the form of correctness triples for AMPSL.

**** Converting ~Expression~ into ~Term~
As discussed in [[*Hoare Logic]], a simple language such as WHILE can use
expressions as terms in assertions directly. The only modification required is
the addition of auxiliary variables. AMPSL is not as simple a language as WHILE,
thanks to the presence of function calls in expressions. Whilst function calls
do not prevent converting expressions into terms, some care must be taken. In
particular, this conversion is only possible due to the pure nature of AMPSL
functions; it would not be possible if functions modified global variables. The
full definition of ~Term~ and its semantics are given in [[*AMPSL Hoare Logic
Definitions]].

First, a demonstration of why function calls need special care in Hoare logic.
We will work in an environment with a single Boolean-valued global variable.
Consider the following AMPSL function, a unary operator on an integer, which is
the identity when ~state 0F~ is false and otherwise performs an increment.

#+begin_src agda2
f : Function [ bool ] [ int ] int
f =
  init
    var 0F ∙
    let x = var 1F in
    let ret = var 0F in
    if state 0F
      then ret ≔ lit 1ℤ + x
  end
#+end_src

Consider the expression ~e = call f [ x ]~ of type ~Expression [ bool ] Γ int~.
There are three important aspects we need to consider for converting ~e~ into a
term: the initial conversion; substitution of variables; and the semantics.
(FIXME: why?)

The simplest conversion is to keep the function call as-is, and simply
recursively convert ~x~ into a term. This would result in a term ~e′ = call f [
x′ ]~, using ~′~ to indicate this term embedding function.

What happens when we try to substitute ~t~, a term involving local variables in
~Γ~, for ~state 0F~ in ~e′~? As ~f~ refers to ~state 0F~, it
must be modified in some way. However, ~Γ~ is a different variable context from
~[ int ]~, so there is no way of writing ~t~ inside of ~f~. This embedding is
not sufficient.

A working solution comes from the insight that a ~Function~ in AMPSL can only
read from global variables, and never write to them. Instead of thinking of ~f~
as a function with a set of global variables and a list of arguments, you can
consider ~f~ to be a function with two sets of arguments. In an ~Expression~,
the first set of arguments always corresponds exactly with the global variables,
so is elided. We can then define an embedding function ~↓_~, such that ~↓ e =
call f [ state 0F ] [ ↓ x ]~, and all the other expression forms as expected.
This makes the elided arguments to ~f~ explicit.

Doing a substitution on ~↓ e~ is now simple: perform the substitution on both
sets of arguments recursively, and leave ~f~ unchanged. As the first set of
arguments corresponds exactly to the global variables of ~f~, a substitution
into those arguments behaves like a substitution into ~f~ itself.
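
A toy sketch of this idea in Haskell (my own simplified term type, not the
thesis's ~Term~): a call carries explicit arguments for the callee's global
variables, so substituting for a global variable only rewrites argument lists
and never the callee itself.

#+begin_src haskell
data Term f
  = StateVar Int               -- global variable
  | LocalVar Int               -- local variable
  | Lit Integer
  | Call f [Term f] [Term f]   -- callee, global arguments, local arguments

-- Substitute a term t for global variable i.
substState :: Int -> Term f -> Term f -> Term f
substState i t (StateVar j) | i == j = t
substState i t (Call f gs ls) =
  Call f (map (substState i t) gs) (map (substState i t) ls)
substState _ _ u = u
#+end_src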

The last major consideration of this embedding is how to encode its semantics.
To be able to prove logical implications within Hoare logic, it is necessary to
have a semantic interpretation for assertions and thus for terms. Going back to
~↓ e~, we already have a denotational semantics for ~f~. To evaluate the call,
we simply evaluate the explicit global and local argument lists and pass the
results to ~f~ as its global and local variable contexts respectively. Thus
~↓ e~ is a valid conversion from ~Expression~ to ~Term~.

The only other difference between ~Expression~ and ~Term~ is the use of
auxiliary variables within Hoare logic terms. AMPSL accomplishes this by
providing a ~meta~ constructor much like ~state~ and ~var~. This indexes into a
new auxiliary variable context, Δ, which forms part of the type definition of
~Term~.

**** Hoare Logic Assertions
An important part of Hoare logic is the assertion language used within the
correctness triples. The Hoare logic for AMPSL uses a first-order logic, which
allows for the easy proof of many logical implications at the expense of not
being complete over the full set of state properties. The full definition and
semantics of the ~Assertion~ type are in [[*AMPSL Hoare Logic Definitions]].

The ~Assertion~ type has the usual set of Boolean connectives: ~true~, ~false~,
~_∧_~, ~_∨_~, ~¬_~ and ~_→_~. When compared to the ~fin~ AMPSL expression, which
performs arbitrary manipulations on finite sets, using this fixed set of
connectives may appear restrictive. The primary reason in favour of a fixed set
of connectives is that the properties are well-defined. This makes it possible
to prove properties about the ~Assertion~ type within proofs that would not be
possible if assertions could use arbitrary connectives.

Another constructor of ~Assertion~ is ~pred~, which accepts an arbitrary
Boolean-valued ~Term~. This is the only way to test properties of the current
program state within assertions. As nearly all types have equality comparisons,
~pred~ can encode equality and inequality constraints on values. Furthermore, as
~Term~ embeds ~Expression~, many complex computations can be performed within
~pred~. To allow equality between two terms of any type, there is an ~equal~
function to construct an appropriate assertion.

The final two constructors of ~Assertion~ provide first-order quantification
over auxiliary variables. ~all~ provides universal quantification and ~some~
provides existential quantification.

Semantically, an assertion is a predicate on the current state of execution. For
AMPSL, this state is the current global, local and auxiliary variable contexts.
As is usual in Agda, the predicates are encoded as an indexed family of sets.

The Boolean connectives are represented by their usual type-theoretic
counterparts: the unit type for ~true~, the empty type for ~false~, product
types for ~_∧_~, sum types for ~_∨_~, function types for ~_→_~ and the negation
type for ~¬_~.

Quantifier assertions are also quite easy to represent semantically. For
universal quantification, you have a function taking values of the type of the
auxiliary variable, which returns the encoding of the inner assertion with
auxiliary context extended by this value. For existential quantification, you
instead have a dependent pair of a value with the auxiliary variable type, and
semantic encoding of the inner assertion.

The final ~Assertion~ form to consider is ~pred~. This first evaluates the
associated Boolean term. If true, the semantics returns the unit type.
Otherwise, it returns the empty type.

(FIXME: necessary?)
In a language whose values range over types with many inhabitants, some readers
may feel that reducing equalities between such values to a binary result loses
information. Providing this information to the user would require a way to
convert Boolean-valued terms into a normal form with an inequality operator at
the root. This conversion would be highly non-trivial, especially due to the
presence of function calls in terms.

Fortunately, all equalities and inequalities between AMPSL values are decidable,
either by construction of the type for Booleans and finite sets, or by
specification for integers and real numbers. This allows the user to extract
Agda terms for equalities given only knowledge of whether terms are equal.

**** Correctness Triples for AMPSL
In the traditional presentation of Hoare logic ([[*Hoare Logic]]), there are two
types of rule: structural rules based on program syntax, and adaptation rules
that modify preconditions and postconditions. The Hoare logic for AMPSL unifies the
two forms of rules, eliminating the need to choose which type of rule to use
next. This allows for purely syntax-directed proofs for any choice of
precondition and postcondition.

#+name: AMPSL-Hoare-logic
#+caption: The Hoare logic correctness triples for AMPSL. This combines the
#+caption: structural and adaptation rules you would find in traditional
#+caption: renderings of Hoare logic into a single set of structural rules.
#+begin_src agda2
data HoareTriple (P : Assertion Σ Γ Δ) (Q : Assertion Σ Γ Δ) :
                 Statement Σ Γ → Set (ℓsuc ℓ) where
  seq     : ∀ R → HoareTriple P R s → HoareTriple R Q s₁ → HoareTriple P Q (s ∙ s₁)
  skip    : P ⊆ Q → HoareTriple P Q skip
  assign  : P ⊆ subst Q ref (↓ val) → HoareTriple P Q (ref ≔ val)
  declare : HoareTriple
              (Var.weaken 0F P ∧ equal (var 0F) (Term.Var.weaken 0F (↓ e)))
              (Var.weaken 0F Q)
              s →
            HoareTriple P Q (declare e s)
  invoke  : let metas = All.map (Term.Meta.inject Δ) (All.tabulate meta) in
            let varsToMetas = λ P → Var.elimAll (Meta.weakenAll [] Γ P) metas in
            let termVarsToMetas =
              λ t → Term.Var.elimAll (Term.Meta.weakenAll [] Γ t) metas in
            HoareTriple
              ( varsToMetas P
              ∧ equal (↓ tup (All.tabulate var)) (termVarsToMetas (↓ tup es))
              )
              (varsToMetas Q)
              s →
            HoareTriple P Q (invoke (s ∙end) es)
  if      : HoareTriple (P ∧ pred (↓ e)) Q s →
            P ∧ pred (↓ inv e) ⊆ Q →
            HoareTriple P Q (if e then s)
  if-else : HoareTriple (P ∧ pred (↓ e)) Q s →
            HoareTriple (P ∧ pred (↓ inv e)) Q s₁ →
            HoareTriple P Q (if e then s else s₁)
  for     : (I : Assertion _ _ (fin _ ∷ _)) →
            P ⊆ Meta.elim 0F I (↓ lit 0F) →
            HoareTriple {Δ = _ ∷ Δ}
              ( Var.weaken 0F
                  (Meta.elim 1F (Meta.weaken 0F I)
                                (fin inject₁ (cons (meta 0F) nil)))
              ∧ equal (meta 0F) (var 0F)
              )
              (Var.weaken 0F
                 (Meta.elim 1F (Meta.weaken 0F I)
                               (fin suc (cons (meta 0F) nil))))
              s →
            Meta.elim 0F I (↓ lit (fromℕ m)) ⊆ Q →
            HoareTriple P Q (for m s)
#+end_src

We will now talk through each of the Hoare logic rules for AMPSL, which are
given in [[AMPSL-Hoare-logic]]. The simplest rule to consider is ~skip~.  This
immediately demonstrates how AMPSL Hoare logic combines structural and
adaptation rules. A purely structural rule for ~skip~ would be ~HoareTriple P P
skip~; the ~skip~ statement has no effect on the current state. By combining
this with the rule of consequence, a ~skip~ statement allows for logical
implication.

The ~seq~ rule is as you would expect and mirrors the SEQ rule of WHILE's Hoare
logic. The only potential surprise is that the intermediate assertion has to be
given explicitly. This is because Agda is unable to infer the intermediate
assertion ~R~ from the numerous manipulations applied to it by the other
correctness rules.

Another pair of simple rules are ~if~ and ~if-else~. In fact, the ~if-else~ rule
is identical to the corresponding Hoare logic rule from WHILE, and ~if~ only
differs by directly substituting in a ~skip~ statement for the negative branch.

The final trivial rule is ~assign~. Like the ~skip~ rule, the ~assign~ rule
combines the structural and adaptation rules of WHILE into a single Hoare logic
rule for AMPSL. A purely structural rule would have ~subst Q ref (↓ val)~ as the
precondition of the statement. AMPSL combines this with the rule of consequence
to allow for an arbitrary precondition.

The other Hoare logic rules for AMPSL are decidedly less simple. Most of the
added complexity is a consequence of AMPSL's type safety. For example, whilst it
is trivial to add a free variable to an assertion on paper, doing so in a
type-safe way for the ~Assertion~ type requires constructing a whole new Agda
term, as the variable context forms part of the type.

The ~declare~ rule is the simplest of the three remaining. The goal is to
describe a necessary triple on ~s~ such that ~HoareTriple P Q (declare e s)~ is
a valid correctness triple. First, note that ~P~ and ~Q~ have type ~Assertion Σ
Γ Δ~, whilst ~s~ has type ~Statement Σ (t ∷ Γ)~ due to the declaration
introducing a new variable. To be able to use ~P~ and ~Q~, they have to be
weakened to the type ~Assertion Σ (t ∷ Γ) Δ~, achieved by calling ~Var.weaken
0F~. We will denote the weakened forms ~P′~ and ~Q′~ for brevity. The recursive
triple we have is ~HoareTriple P′ Q′ s~. However, this does not constrain the
new variable. Thus we assert that the new variable ~var 0F~ is equal to the
initial value ~e~.  However, ~e~ has type ~Expression Σ Γ~ and we need a ~Term Σ
(t ∷ Γ) Δ~. Hence we must instead use ~Term.Var.weaken 0F (↓ e)~, denoted ~e′~,
which converts ~e~ to a term and introduces the new variable. This finally gives
us the triple we need: ~HoareTriple (P′ ∧ equal (var 0F) e′) Q′ s~.

I will go into less detail whilst discussing ~invoke~ and ~for~, due to an even
greater level of complexity. The ~for~ rule is the simpler case, so I will start
there. The form of the ~for~ rule was inspired by the WHILE rule for a ~while~
loop, but specialised to a form with a fixed number of iterations.

Given a ~for n s~ statement, we first choose a loop invariant ~I : Assertion Σ Γ
(fin (suc n) ∷ Δ)~. The additional auxiliary variable indicates the number of
complete iterations of the loop, from \(0\) to \(n\). We will use ~I(x)~ to
denote the assertion ~I~ with the additional auxiliary variable replaced with
term ~x~, and make weakening variable contexts implicit. We require that ~P ⊆
I(0)~ and ~I(n) ⊆ Q~ to ensure that the precondition and postcondition are an
adaptation of the loop invariant. The final part to consider is the correctness
triple for ~s~. We add in a new auxiliary variable representing the value of the
loop variable. This is necessary to ensure the current iteration number is
preserved between the precondition and postcondition, as the loop variable
itself can be modified by ~s~. We then require that the following triple holds:
~HoareTriple (I(meta 0F) ∧ equal (meta 0F) (var 0F)) I(1+ meta 0F) s~. This
ensures that ~I~ remains true across the loop iteration, for each possible value
of the loop variable.
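
As a purely hypothetical illustration (not taken from the formalisation),
suppose the body ~s~ increments a global ~counter~ whose initial value is
recorded in an auxiliary variable \(c_0\). Writing \(x\) for the fresh auxiliary
variable ~meta 0F~, a suitable invariant and the three premises of the ~for~
rule then take the shape

\[
I(x) \equiv (\texttt{counter} = c_0 + x), \qquad
P \subseteq I(0), \qquad
\{\, I(x) \wedge x = \texttt{var 0F} \,\}\;\texttt{s}\;\{\, I(x + 1) \,\}, \qquad
I(n) \subseteq Q.
\]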

Notice that unlike the denotational semantics, which would explicitly execute
each iteration of a loop, the Hoare logic instead requires only a single proof
term for all iterations of the loop. This is one of the primary benefits of
using Hoare logic over the denotational semantics; it has a much lower
computational cost.

The final Hoare logic rule for AMPSL is ~invoke~. Procedure invocation is tricky
in AMPSL's Hoare logic due to the changing local variable scope in the procedure
body. Of particular note, any local variables in the precondition and
postcondition for a procedure invocation cannot be accessed nor modified by the
procedure body. This is the inspiration for the form of the ~invoke~ rule.

To construct ~HoareTriple P Q (invoke (s ∙end) es)~, we first consider the form
~P~ and ~Q~ will take in a correctness triple for ~s~. Note that local variables
in ~P~ and ~Q~ are immutable within ~s~, due to the changing local variable
scope. Also note that the local variables cannot be accessed using ~var~; ~P~
and ~Q~ have type ~Assertion Σ Γ Δ~, but ~s~ has type ~Statement Σ Γ′~ for some
context ~Γ′~ independent of ~Γ~. As the original local variables are immutable
during the invocation, we can replace them with auxiliary variables, by
assigning a new auxiliary variable for each one. Within ~P~ and ~Q~, we then
replace all ~var x~ with ~meta x~ to reflect that the local variables have been
moved to auxiliary variables. This is the action performed by the ~varsToMetas~
function. Finally, we have to ensure that the local variables within the
procedure body are initially set to the invocation arguments. Like ~P~ and ~Q~,
the local variables in ~es~ have to be replaced with the corresponding auxiliary
variables. This substitution is done by ~termVarsToMetas~.

Example uses of these rules, particularly ~invoke~ and ~for~, are given in
(FIXME: forward reference).

* Properties and Evaluation of AMPSL

# For any practical projects, you should almost certainly have some kind
# of evaluation, and it's often useful to separate this out into its own
# chapter.

This chapter has two major concerns. The first is to prove that AMPSL's Hoare
logic is sound with respect to the denotational semantics. If the logic is not
sound, it is unsuitable for use in proofs. I will also discuss what steps need
to be taken to show a restricted form of completeness for AMPSL.

The other half of this chapter will give a practical example of using AMPSL to
prove a proposition. I will give the AMPSL encoding of the pseudocode form of
the Barrett reduction algorithm given by
[cite/t:@10.46586/tches.v2022.i1.482-505]. I will demonstrate how this works on
some concrete values, and explain what work is left to be done to prove a more
general statement.

** Soundness of AMPSL's Hoare Logic

#+name: AMPSL-soundness-statement
#+caption: The Agda type defining soundness of AMPSL's Hoare Logic. If there is
#+caption: a correctness triple \(\{P\}\;\texttt{s}\;\{Q\}\) then for any
#+caption: variable contexts σ, γ and δ, a proof that \(P\) holds initially
#+caption: implies that \(Q\) holds after executing \texttt{s} on the global and
#+caption: local variable contexts.
#+begin_src agda2
sound : P ⊢ s ⊢ Q →
        ∀ σ γ δ →
        Assertion.⟦ P ⟧ σ γ δ →
        uncurry Assertion.⟦ Q ⟧
          (Semantics.stmt s (σ , γ))
          δ
#+end_src

To prove that AMPSL's Hoare logic is sound, we first need to define what is
meant by soundness. [[AMPSL-soundness-statement]] shows the Agda type corresponding
to the proposition.

#+begin_theorem
Given a Hoare logic proof that \(\{P\}\;\texttt{s}\;\{Q\}\) holds, then for any
concrete instantiation of the global, local and auxiliary variable contexts, if
\(P\) holds on the initial state, \(Q\) holds on the state after evaluating
\texttt{s}.
#+end_theorem

Some cases in this inductive proof are trivial: the premise of the ~skip~ Hoare
logic rule is exactly the proof statement we need, and the ~seq~ rule can be
satisfied by composing the results of the inductive hypothesis on the two
premises. The ~if~ and ~if-else~ rules pattern match on the result of evaluating
the condition expression. Then it recurses into the true or false branch
respectively. This relies on a trivial proof that the semantics of an
~Expression~ are propositionally equal to the semantics of that expression
embedded as a ~Term~.

The ~assign~ rule is also relatively simple. Because the ~Reference~ type
excludes product references, it is sufficient to show that substituting into a
single global or local variable is sound. Due to the recursive nature of
substitution, this simply requires a propositional proof of equality for terms.

Other cases like ~declare~, ~invoke~ and ~for~ are much more complex, mostly due
to the use of helper functions like variable weakening and elimination. We take
a quick diversion into how to prove these manipulations do not affect the
semantics of terms and assertions before discussing how soundness is shown for
these more complex Hoare logic rules.

*** Proving Properties of Term and Assertion Manipulations
#+name: term-homomorphisms
#+caption: The types of all the ~Term~ homomorphisms required to define AMPSL's
#+caption: Hoare Logic. They are logically split into three groups depending on
#+caption: whether the homomorphism targets global, local or auxiliary
#+caption: variables.
#+begin_src agda2
module State where
  subst     : ∀ i → Term Σ Γ Δ t → Term Σ Γ Δ (lookup Σ i) → Term Σ Γ Δ t
module Var where
  weaken    : ∀ i → Term Σ Γ Δ t → Term Σ (insert Γ i t′) Δ t
  weakenAll : Term Σ [] Δ t → Term Σ Γ Δ t
  elim      : ∀ i → Term Σ (insert Γ i t′) Δ t → Term Σ Γ Δ t′ → Term Σ Γ Δ t
  elimAll   : Term Σ Γ Δ t → All (Term Σ ts Δ) Γ → Term Σ ts Δ t
  subst     : ∀ i → Term Σ Γ Δ t → Term Σ Γ Δ (lookup Γ i) → Term Σ Γ Δ t
module Meta where
  weaken    : ∀ i → Term Σ Γ Δ t → Term Σ Γ (insert Δ i t′) t
  weakenAll : ∀ (Δ′ : Vec Type k) (ts : Vec Type m) → Term Σ Γ (Δ′ ++ Δ) t → Term Σ Γ (Δ′ ++ ts ++ Δ) t
  inject    : ∀ (ts : Vec Type n) → Term Σ Γ Δ t → Term Σ Γ (Δ ++ ts) t
  elim      : ∀ i → Term Σ Γ (insert Δ i t′) t → Term Σ Γ Δ t′ → Term Σ Γ Δ t
#+end_src

Three out of eight of AMPSL's Hoare logic rules require manipulating the form of
terms and assertions to introduce free variables, rename existing variables, or
perform eliminations or substitutions of variables. [[term-homomorphisms]] gives the
types of each of the ten homomorphisms on terms. Given that the ~Term~ type has
32 constructors, a naive definition would require 320 cases, each at least a
line long and most of them duplicates.

This number can be greatly reduced by realising that the only interesting cases
in these homomorphisms are the constructors for variables: ~state~, ~var~ and
~meta~. By giving the action of a homomorphism on these three constructors,
you can construct the definition of a full homomorphism.

#+name: term-weakening
#+caption: A record that defines the three interesting cases for weakening a
#+caption: ~Term~ by adding a new local variable. A generic function extends a
#+caption: ~RecBuilder~ into a full term homomorphism.
#+begin_src agda2
weakenBuilder : ∀ i → RecBuilder Σ Γ Δ Σ (insert Γ i t) Δ
weakenBuilder {Γ = Γ} i = record
  { onState = state
  ; onVar   = λ j → Cast.type (Vecₚ.insert-punchIn Γ i _ j) (var (punchIn i j))
  ; onMeta  = meta
  }
#+end_src

This is best illustrated by an example. [[term-weakening]] shows how weakening local
variables can be extended to a full homomorphism by only giving the ~state~,
~var~ and ~meta~ cases. As weakening local variables only affects the ~var~
case, the ~state~ and ~meta~ cases are identities. The ~var~ case then "punches
in" the new variable, wrapped in a type cast to satisfy Agda's dependent typing.

Proving that the term manipulations are indeed homomorphisms in the semantics
also requires far fewer lines than the 320 naive cases. As with the manipulation
definitions, the proofs only need to be given for the ~state~, ~var~ and ~meta~
cases. However, it is not enough for a proof to simply show that the ~state~,
~var~ and ~meta~ cases are homomorphisms; the proof must also state how to
extend or reduce the variable contexts to the correct form.

#+name: term-weakening-proof
#+caption: A record that shows that ~Term.Var.weaken~ is a homomorphism that
#+caption: preserves semantics. Because the variable contexts change between the
#+caption: two sides of the homomorphism, this record has to describe how to
#+caption: extend the variable contexts first. Then it has to show the actions
#+caption: of ~Term.Var.weaken~ on global, local and auxiliary variables are
#+caption: indeed homomorphisms. A similar record type exists for homomorphisms
#+caption: that restrict the variable contexts like variable elimination.
#+begin_src agda2
weakenBuilder : ∀ i → ⟦ t ⟧ₜ → RecBuilder⇒ (Term.Var.weakenBuilder {Σ = Σ} {Γ = Γ} {Δ = Δ} {t = t} i)
weakenBuilder {t = t} {Γ = Γ} i v = record
  { onState⇒ = λ σ γ δ → σ
  ; onVar⇒   = λ σ γ δ → Core.insert′ i Γ γ v
  ; onMeta⇒  = λ σ γ δ → δ
  ; onState-iso = λ _ _ _ _ → refl
  ; onVar-iso   = onVar⇒
  ; onMeta-iso  = λ _ _ _ _ → refl
  }
  where

  onVar⇒ : ∀ j σ γ δ → _
  onVar⇒ j σ γ δ = begin
    Term.⟦ Term.Cast.type eq (var (punchIn i j)) ⟧ σ γ′ δ
      ≡⟨ Cast.type eq (var (punchIn i j)) σ γ′ δ ⟩
    subst ⟦_⟧ₜ eq (Core.fetch (punchIn i j) (insert Γ i t) γ′)
      ≡⟨ Coreₚ.fetch-punchIn Γ i t j γ v ⟩
    Core.fetch j Γ γ
      ∎
    where
    open ≡-Reasoning
    γ′ = Core.insert′ i Γ γ v
    eq = Vecₚ.insert-punchIn Γ i t j
#+end_src

Returning to the local variable weakening example, the relevant proof
construction is shown in [[term-weakening-proof]]. First I specify how to modify the
variable contexts. The global and auxiliary variable contexts are unchanged,
whereas a value for the weakened variable is inserted into the local variable
context.  Then we prove the homomorphism is correct on each of ~state~, ~var~
and ~meta~.  As ~state~ and ~meta~ were unchanged, the proof is trivial by
reflexivity. The variable case is also quite simple, first proving that the
~Cast.type~ function is denotationally the same as a substitution, and then
showing that fetching a "punched in" index from a list with an insertion is the
same as fetching the original index from an unmodified list.

In total, these two optimisations save roughly 580 lines of Agda code in
the definition and proofs of term manipulations. However, there are still
roughly 800 lines remaining that would be difficult to reduce further.

Assertion manipulations have a similar amount of repetition to term
manipulations. However, there are two important differences that make a generic
recursion scheme unnecessarily complex. First, the ~Assertion~ type has fewer
constructors, totalling nine instead of 32. Whilst homomorphisms
will still feature plenty of boilerplate, it occurs at a much lower ratio
relative to the amount of useful code. The second reason is that the ~all~ and
~some~ constructors introduce new auxiliary variables. This means that the
subterms of these constructors have a different type from other assertions,
making a generic solution much more complex.

Proofs that assertion manipulations are homomorphisms are also fundamentally
different from those for term homomorphisms. Whilst the denotational semantics
of a term produces a value of the same type whether or not it is under a
homomorphism, the denotational representation of an assertion is itself a type.
In particular, the dependent types created by the denotations of ~all~ and ~some~
assertions are impossible to use to any useful degree with propositional
equality. Instead, I give type equalities, which are pairs of functions mapping
each type to the other.

Only three constructors for ~Assertion~ have interesting cases in these proofs.
The ~pred~ constructor delegates the work to proofs on the ~Term~ manipulations,
using the resulting propositional equality to safely return the input term.

*** Soundness of ~declare~, ~for~ and ~invoke~
Referring back to [[AMPSL-Hoare-logic]] for the Hoare logic definitions, we can now
prove soundness for the other rules. The ~declare~ rule is quite simple. First,
we create a proof that the weakened precondition holds, and add to it a proof
that the additional variable is indeed the initial value of the newly-declared
variable. Then we recursively apply the soundness proof, to obtain a proof that
the weakened post-condition holds. Finally, we apply the weakening proof for
~Assertion~ in reverse, obtaining a proof that the postcondition holds.

The proof for ~for~ is much more involved, and only an outline will be given. I
will also reuse the syntax from [[*Correctness Triples for AMPSL]] for the
invariant.  By using the implication premises for the ~for~ Hoare logic rule, we
can obtain a proof that ~I(0)~ holds from the argument, and convert a proof of
~I(m)~ to a proof of the post-condition. All that remains is a proof that the
loop preserves the invariant.

#+name: foldl-prototype
#+caption: The function signature for proving arbitrary properties about left-folding a vector.
#+begin_src agda2
foldl⁺ : ∀ {a b c} {A : Set a} (B : ℕ → Set b) {m} →
         (P : ∀ {i : Fin (suc m)} → B (Fin.toℕ i) → Set c) →
         (f : ∀ {n} → B n → A → B (suc n)) →
         (y : B 0) →
         (xs : Vec A m) →
         (∀ {i} {x} →
          P {Fin.inject₁ i} x →
          P {suc i}
            (subst B (Finₚ.toℕ-inject₁ (suc i))
                     (f x (Vec.lookup xs i)))) →
         P {0F} y →
         P {Fin.fromℕ m}
           (subst B (sym (Finₚ.toℕ-fromℕ m))
                    (Vec.foldl B f y xs))
#+end_src

To do this, I first had to prove a much more general statement about the action
of left-fold on Agda's ~Vec~ type, the prototype of which is given in
[[foldl-prototype]]. In summary, given a proof of ~P~ for the base case and a proof
that each step of the fold preserves ~P~, it shows that ~P~ holds for the result
of the entire fold.

This means that the remainder of the proof of soundness of ~for~ is a proof that
each iteration maintains the invariant. Using a number of lemmas asserting that
various manipulations of assertions are homomorphisms, as well as a few
type-safe substitutions and a recursive proof of soundness for the iterated
statement, the final proof of soundness for ~for~ totals around 220 lines of
Agda.

Unfortunately, the proof of soundness for ~invoke~ is currently incomplete, due
to time constraints for the project. The proof itself should be simpler than the
proof for the ~for~ rule, as the ~invoke~ rule uses fewer ~Assertion~
manipulations. Whilst each individual step in the rule is trivial, writing them
formally takes a considerable amount of time.

*** Argument for a Proof of Correctness
A general proof of correctness of the AMPSL Hoare logic rules for any predicate
on the input and output states is impossible within Agda. There is a large class
of predicates that falls outside the scope of what can be expressed using the
~Assertion~ type. Additionally, even if a predicate could be the denotational
representation of an assertion, there is no algorithm to recover the assertion
from the predicate, due to the ~Set~ type in Agda not being a data type.

Due to this, any statement about correctness must be given the precondition and
postcondition assertions explicitly. This results in the following Agda type for
the most general proof of correctness:

#+begin_src agda2
-- impossible to prove
correct : (∀ σ γ δ →
           Assertion.⟦ P ⟧ σ γ δ →
           uncurry Assertion.⟦ Q ⟧
             (Semantics.stmt s (σ , γ))
             δ) →
          P ⊢ s ⊢ Q
#+end_src

Unfortunately, this formulation also very quickly runs into a problem in Agda.
Consider the statement ~s ∙ s₁~. To prove this in AMPSL's Hoare logic, we need
to give two subproofs: ~P ⊢ s ⊢ R~ and ~R ⊢ s₁ ⊢ Q~. As input, we have a single
function transforming proofs of the precondition to proofs of the postcondition.
The problem occurs because there is no way to decompose this function into two
parts, one for the first statement and another for the second.

To resolve this, I anticipate that proving correctness in AMPSL's Hoare logic
will require the following steps:

1. Construction of a function ~wp : Statement Σ Γ → Assertion Σ Γ Δ → Assertion
   Σ Γ Δ~ that computes the weakest precondition of an assertion.
2. A proof that for all statements ~s~ and assertions ~P~, ~wp s P ⊢ s ⊢ P~ is
   satisfiable.
3. A proof that for all statements ~s~ and assertions ~P~ and ~Q~, ~P ⊢ s ⊢ Q~
   implies ~P ⊆ wp s Q~.
4. A proof that the rule of consequence is derivable from the other AMPSL Hoare
   logic rules.

The first three steps form the definition of the weakest precondition for an
assertion: step one asserts that such an assertion exists for all statements
and assertions; step two asserts that the assertion is indeed a valid
precondition for the choice of statement and postcondition; and step three
asserts that any other precondition for ~s~ that derives ~Q~ must entail the
weakest precondition.

With the additional step of proving the rule of consequence as a meta rule, we
can now give this formulation for the correctness of AMPSL's Hoare logic, which
follows trivially from the four steps above:

#+begin_src agda2
correct : (∀ σ γ δ →
           Assertion.⟦ P ⟧ σ γ δ →
           Assertion.⟦ wp s Q ⟧ σ γ δ) →
          P ⊢ s ⊢ Q
#+end_src

Constructing the weakest preconditions from an ~Assertion~ and ~Statement~
should be a relatively simple recursion. I will sketch the ~if_then_else_~,
~invoke~ and ~for~ cases. For ~if e then s else s₁~, we can recursively
construct the weakest preconditions ~P~ and ~P₁~ for ~s~ and ~s₁~ respectively.
The weakest precondition of the full statement will then be ~P ∧ e ∨ P₁ ∧ inv
e~.
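
A toy sketch of this recursion in Haskell (my own miniature statement and
assertion language, not the AMPSL definitions), keeping assertions syntactic so
that ~wp~ mirrors the cases described above:

#+begin_src haskell
data Expr      = Var String | Lit Integer | Add Expr Expr | Le Expr Expr
data Assertion = Pred Expr | And Assertion Assertion
               | Or Assertion Assertion | Not Assertion
data Stmt      = Skip | Seq Stmt Stmt | Assign String Expr | If Expr Stmt Stmt

-- Substitute expression e for variable x (used for the assignment case).
substE :: String -> Expr -> Expr -> Expr
substE x e (Var y) | x == y = e
substE x e (Add a b)        = Add (substE x e a) (substE x e b)
substE x e (Le a b)         = Le (substE x e a) (substE x e b)
substE _ _ u                = u

substA :: String -> Expr -> Assertion -> Assertion
substA x e (Pred p)  = Pred (substE x e p)
substA x e (And a b) = And (substA x e a) (substA x e b)
substA x e (Or a b)  = Or  (substA x e a) (substA x e b)
substA x e (Not a)   = Not (substA x e a)

-- wp follows the statement structure; the If case has the shape described
-- in the text: (P ∧ e) ∨ (P₁ ∧ inv e).
wp :: Stmt -> Assertion -> Assertion
wp Skip         q = q
wp (Seq s1 s2)  q = wp s1 (wp s2 q)
wp (Assign x e) q = substA x e q
wp (If c s1 s2) q = Or (And (Pred c) (wp s1 q))
                       (And (Not (Pred c)) (wp s2 q))
#+end_src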

To find the weakest precondition of a procedure invocation ~invoke (s ∙end) es~
and ~Q~, first find the weakest precondition of ~s~ and ~Q′~, where ~Q′~ is the
result of replacing local variables in ~Q~ with auxiliary variables in the same
manner as the ~invoke~ AMPSL Hoare logic rule. Then, apply the inverse
transformation to the auxiliary variables, and finally replace occurrences of
the procedure-local variables with the arguments.

(FIXME: describe how to compute weakest precondition of ~for~)

** Using AMPSL for Proofs

This section describes how I converted the Arm pseudocode representing a
Barrett reduction implementation [cite:@10.46586/tches.v2022.i1.482-505] into
AMPSL, focusing on the modelling choices I made. I will then discuss how to use
the AMPSL code in concrete proofs for specific values, before concluding with
the steps necessary to abstract the proof to arbitrary values.

The most significant modelling decisions are the omissions of the ~TopLevel~
[cite:@arm/DDI0553B.s §E2.1.400] and the ~InstructionExecute~
[cite:@arm/DDI0553B.s §E2.1.225] pseudocode functions.  ~TopLevel~ primarily
deals with debugging, halt and lockup processor states, none of which are
relevant for the Barrett reduction or NTT correctness proofs I am working
towards. ~InstructionExecute~ deals with fetching instructions and deciding
whether to execute an instruction "beatwise" or linearly.

Most vector instructions for the Armv8.1-M architecture are executed beatwise. A
vector register is a 128-bit value with four 32-bit lanes. Beatwise execution
allows some lanes to anticipate future vector instructions and execute them
before a previous instruction has finished on other lanes [cite:@arm/DDI0553B.s
§B5.4]. There are additional conditions on which instructions can be
anticipated, essentially boiling down to any order that has the same result as
executing the instructions linearly.

#+name: ExecBeats-impl
#+caption: A side-by-side comparison of a simplified form of the ~ExecBeats~
#+caption: function from [cite:@arm/DDI0553B.s §E2.1.121] versus the model used
#+caption: in AMPSL. (FIXME: figure padding).
#+begin_figure
\begin{subfigure}[b]{0.45\textwidth}
\begin{verbatim}
boolean ExecBeats()
  newBeatComplete = BeatComplete
  _InstId = instId;
  _CurrentInstrExecState = GetInstrExecState(instId);
  InstStateCheck(ThisInstr());
  for beatInTick = 0 to BEATS_PER_TICK-1
    beatId = beatInTick
    beatFlagIdx = (instId * MAX_BEATS) + beatId;
    if newBeatComplete[beatFlagIdx] == '0' then
      _BeatId          = beatId;
      _AdvanceVPTState = TRUE;
      cond             = DefaultCond();
      DecodeExecute(
        ThisInstr(),
        ThisInstrAddr(),
        ThisInstrLength() == 2,
        cond);
      newBeatComplete[beatFlagIdx] = '1';
      if _AdvanceVPTState then
        VPTAdvance(beatId);
  commitState =
    newBeatComplete[MAX_BEATS-1:0] ==
    Ones(MAX_BEATS);
  if commitState then
    newBeatComplete =
      LSR(newBeatComplete, MAX_BEATS);
  BeatComplete = newBeatComplete
  return commitState;
\end{verbatim}
\caption{Arm pseudocode}
\label{ExecBeats-impl-Arm}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\begin{minted}{agda}
ExecBeats : Procedure State [] →
            Procedure State []
ExecBeats DecodeExec =
  for 4 (
    let beatId = var 0F in
    BeatId ≔ beatId ∙
    AdvanceVPTState ≔ lit true ∙
    invoke DecodeExec [] ∙
    if ! AdvanceVPTState
    then
      invoke VPTAdvance (beatId ∷ []))
  ∙end
\end{minted}
\caption{AMPSL model}
\label{ExecBeats-impl-AMPSL}
\end{subfigure}
#+end_figure

The choice of which instruction beats to schedule is made in the ~ExecBeats~
pseudocode function [cite:@arm/DDI0553B.s §E2.1.121]. In my model, shown
side-by-side with the pseudocode in [[ExecBeats-impl]], I reduce the scheduling to
a linear order in which all beats of a beatwise instruction are executed in a
single tick.

Another pseudocode function I have decided to omit is ~DecodeExecute~. This
performs instruction decoding as specified in Chapter C2 of
[cite/t:@arm/DDI0553B.s §E2.1.121], and then performs the execution step
specified further down in the instruction descriptions. Instead, I parameterise
~ExecBeats~ by a procedure that performs the execution of a single instruction.

The two instructions needed to model the Barrett reduction implementation
algorithm by [cite/t:@10.46586/tches.v2022.i1.482-505] are ~VQMLRDH~ and ~VMLA~,
given in (FIXME: figure) along with my AMPSL model counterparts. Both procedures
end with a loop that copies the masked bytes of an intermediate result into the
destination register.  This is the action performed by the ~copyMasked~
procedure given back in [[*Example AMPSL Statements]].

The final AMPSL procedure used to calculate Barrett reduction is in (FIXME:
figure). As Barrett reduction is performed with a fixed positive base, the
procedure takes the base as a non-zero Agda natural number.

This definition was tested by using the following snippet, instantiating
the ~int~ and ~real~ types with Agda integers and rationals respectively.

#+begin_src agda2
do-barrett : (n : ℕ) →
             (zs : Vec ℤ 4) →
             Statement State []
do-barrett n zs =
  for 4 (
    Q[ lit 0F , var 0F ] ≔
      call (sliceⁱ 0) (index (lit zs) (var 0F) ∷ [])) ∙
  invoke (barrett n 1F 0F 0F) []

barrett-101 : Statement State []
barrett-101 = do-barrett 101 (+ 1 ∷ + 7387 ∷ + 102 ∷ - 7473 ∷ [])
#+end_src

Asking Agda to normalise the ~barrett-101~ value, which expands the function
definitions to produce a single ~Statement~, results in a 16KB ~Statement~. When
I tried to evaluate this denotationally, Agda crashed after 45 minutes.

Despite this example being relatively small, the poor performance of AMPSL's
denotational semantics highlights the necessity of the syntax-directed Hoare
logic proof system. Even so, using the Hoare logic rules to directly prove that
~barrett-101~ has the desired effect leads to a very tedious proof that expands
out the whole derivation tree.

*** Proving Reusable Results
One fundamental principle of programming is DRY: don't repeat yourself. This is
achieved by using functions and procedures to abstract out common behaviours.
Similarly, to fully utilise the power of Hoare logic, an abstract, reusable
correctness triple should be given for the behaviour of invoking functions and
procedures.

I attempted to do this for the ~copyMasked~ procedure given in [[*Example AMPSL
Statements]], the type of which is given below:

#+begin_src agda2
copyMasked-mask-true : ∀ {i v beat mask} {P Q : Assertion State Γ Δ} →
                    P ⊆ equal (↓ mask) (lit (replicate (lift Bool.true))) →
                    P ⊆ Assertion.subst Q Q[ i , beat ] (↓ v) →
                    P ⊢ invoke copyMasked (i ∷ v ∷ beat ∷ mask ∷ []) ⊢ Q
#+end_src

Explained briefly, whenever the mask is all true, invoking the procedure
effectively reduces to the regular assignment rule of AMPSL's Hoare logic.
Expanding the proof derivation results in the following Agda term:

#+begin_src agda2
invoke
  (for
    {!!}
    {!!}
    (if
      (assign {!!})
      {!!})
    {!!})
#+end_src

The holes correspond to a choice of loop invariant and then four logical
implications: entering the loop; leaving the loop; showing the assignment
preserves the loop invariant; and showing that skipping the assignment preserves
the loop invariant.

Whilst none of those steps are particularly tricky, they each require the proofs
of many trivial-on-paper lemmas. Due to the time constraints of the project, I
have been unable to complete these proofs.

* Proof of Barrett Reduction
Barrett reduction is an algorithm to find a small representative of a value
\(z\) modulo some base \(n\). Instead of performing expensive integer division,
Barrett reduction uses an approximation function to precompute a coefficient
\(m = \llbracket 2^k / n \rrbracket\). The integer division \(z / n\) is then
approximated by the value \(\left\llbracket \frac{zm}{2^k} \right\rrbracket\),
and the reduced representative is \(z - n \left\llbracket \frac{zm}{2^k}
\right\rrbracket\).

There are many choices of function that are suitable for the two approximations.
[cite/t:@10.1007/3-540-47721-7_24] used the floor function in both cases,
whereas the Barrett reduction implementation by
[cite/t:@10.46586/tches.v2022.i1.482-505] instead uses \(\llbracket z \rrbracket
= 2 \left\lfloor \frac{z}{2} \right\rfloor\). Work by
[cite/t:@10.46586/tches.v2022.i1.211-244] proves results for regular rounding at
runtime, but any \ldquo{}integer approximation\rdquo{} for precomputing the
coefficient \(m\).

The simplest form of Barrett reduction is Barrett's original, which uses the
floor function for both approximations. Thus this is the version for which I
have produced my initial proof.
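
As a quick numeric sanity check (my own sketch over Haskell integers, not the
abstract rings and fields used in the proof), the flooring variant can be
computed directly:

#+begin_src haskell
-- Flooring Barrett reduction with base n and precision parameter k.
barrett :: Integer -> Integer -> Integer -> Integer
barrett k n z = z - n * ((z * m) `div` 2 ^ k)
  where
    m = 2 ^ k `div` n            -- precomputed coefficient ⌊2^k / n⌋

-- For example, barrett 16 101 7387 == 14 == 7387 `mod` 101; for
-- 0 <= z <= 2^k the result always lies in the interval [0, 2 * n).
#+end_src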

Unlike the previous authors, who all dealt explicitly with integers and
rationals, I instead proved a more abstract result for an arbitrary commutative
ordered ring \(ℤ\) and ordered field \(ℝ\) with a homomorphism \(\cdot/1 : ℤ
\to ℝ\) and a floor function \(\lfloor\cdot\rfloor : ℝ \to ℤ\) that is /not
necessarily/ a homomorphism.

This decision will eventually allow for the direct use of this result in
abstract proofs about the AMPSL Barrett reduction algorithm. This is due to the
choice of AMPSL's type models for ~int~ and ~real~ as abstract structures,
discussed in [[*AMPSL Datatype Models]].

One major time sink for this abstraction was the complete lack of support from
preexisting Agda proofs. Ordered structures like the rings and fields required
are not present in the Agda standard library version 1.7, and the
discoverability of other Agda libraries is lacking. Thus much work was spent
encoding these structures and proving many trivial lemmas about them, such as
sign-preservation, monotonicity and cancellation properties.

#+name: barrett-properties
#+caption: Three properties I was able to prove about flooring Barrett reduction
#+caption: for an abstract ordered ring and field.
#+begin_src agda2
barrett-mods     : ∀ z → ∃ λ a → barrett z + a * n ≈ z
barrett-positive : ∀ {z} → z ≥ 0ℤ → barrett z ≥ 0ℤ
barrett-limit    : ∀ {z} → 0ℤ ≤ z → z ≤ 2ℤ ^ k → barrett z < 2 × n
#+end_src

In total I was able to prove three important properties of the flooring variant
of Barrett reduction, listed using Agda in [[barrett-properties]]. The first
property states that Barrett reduction does indeed perform a modulo reduction.
The second ensures that the Barrett reduction of a non-negative value remains
non-negative. The final property states that for sufficiently small values of
\(z\), Barrett reduction produces a representative less than twice the size of
the base.

* Summary and Conclusions

# As you might imagine: summarizes the dissertation, and draws any
# conclusions. Depending on the length of your work, and how well you
# write, you may not need a summary here.

# You will generally want to draw some conclusions, and point to
# potential future work.

#+latex: \label{lastcontentpage}

#+latex: %TC:ignore

#+print_bibliography:

\appendix

* AMPSL Syntax Definition
* AMPSL Denotational Semantics
* AMPSL Hoare Logic Definitions

#+latex: \label{lastpage}
#+latex: %TC:endignore

#  LocalWords:  AMPSL Hoare NTT PQC structs bitstring bitstrings