Nash, Nyquist, Networks, Neural Networks, and… Next?
12:00pm - 1:00pm ET
Meeting ID: 967 0890 8163
Our world is becoming more data-driven and more data-intensive by the day, and the need to handle this deluge of data (images, videos, text, etc.) and make sense of it is more urgent than ever. The resurgence of AI (artificial intelligence) and ML (machine learning) research over the past decade, after the years of the so-called AI winter, is in part motivated by this need. Yet the output of these ML algorithms often generates even more data (chatbots, TikTok videos, etc.), with ever more intelligent agents (human and otherwise) interacting with each other, cooperatively or adversarially, over that data.
Understanding the shape of data (along with the information, and even intelligence, it contains) may help. Standing tall among the major mathematical achievements of the 20th century are theorems whose subsequent impact has far outweighed their original intent. One such theorem is due to John Nash, whose proof of the existence of equilibria in non-cooperative games gave rise to the eponymous Nash equilibrium, which in many ways revolutionized the field of economics. Less well known but equally impactful is Nash's embedding theorem, which states that every Riemannian manifold can be isometrically embedded into some high-dimensional Euclidean space. Yet another is due to Harry Nyquist, whose Nyquist-Shannon sampling theorem, which states that every time-varying, band-limited signal can be perfectly reconstructed from a sequence of samples acquired at a rate of at least twice its maximum frequency, laid the foundation of modern information and communication theory. Furthermore, when the data can be described as a large, dense graph, Szemerédi's regularity lemma has proved a powerful tool for revealing the structure of such graphs (in terms of partitions), despite being called just a "lemma"...
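The sampling theorem mentioned above can be illustrated with a minimal numerical sketch, assuming a simple test signal of my own choosing: a 5 Hz sinusoid sampled at 20 Hz, comfortably above the Nyquist rate of 2 × 5 = 10 Hz, and reconstructed with the Whittaker-Shannon interpolation formula.

```python
import numpy as np

# Illustrative assumption: a 5 Hz sinusoid sampled at 20 Hz,
# above the Nyquist rate of 2 * 5 = 10 Hz.
f_sig = 5.0              # maximum frequency present in the signal (Hz)
fs = 20.0                # sampling rate (Hz), > 2 * f_sig
T = 1.0 / fs             # sampling interval (s)

t_samples = np.arange(0.0, 1.0, T)                 # 20 sample instants
x_samples = np.sin(2 * np.pi * f_sig * t_samples)  # the acquired samples

# Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc((t - nT) / T)
t_fine = np.linspace(0.0, 1.0 - T, 400)
x_rec = np.array([np.sum(x_samples * np.sinc((t - t_samples) / T))
                  for t in t_fine])
x_true = np.sin(2 * np.pi * f_sig * t_fine)

# With only 20 samples the sinc sum is truncated, so we measure accuracy
# away from the edges, where truncation error is negligible.
interior = (t_fine > 0.25) & (t_fine < 0.7)
max_err = float(np.max(np.abs(x_rec[interior] - x_true[interior])))
```

Away from the truncation edges the reconstruction matches the original signal closely; sampling below 10 Hz would instead alias the sinusoid to a lower frequency.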
I will describe some recent progress in extending these results, for example in dynamic game theory, where the rules of the game change over time, and in the theory of compressive sensing, which guarantees perfect reconstruction of signals from far fewer samples than the Nyquist theorem requires, provided the signals are sparse in some appropriate domain. I will then describe some applications of these extensions to machine learning, brain imaging, community detection, etc., and finally hint at some potential connections among these important theorems.
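The sparse-recovery claim behind compressive sensing can likewise be sketched numerically. The toy example below uses orthogonal matching pursuit (OMP), one standard greedy recovery algorithm (the talk itself may of course rely on different methods); all problem sizes are illustrative assumptions.

```python
import numpy as np

# Illustrative assumption: a 3-sparse signal of length 100, recovered
# from only 50 random linear measurements via OMP.
rng = np.random.default_rng(0)
n, m, k = 100, 50, 3          # ambient dimension, measurements, sparsity

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k)   # k-sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = A @ x_true                            # m << n linear measurements

# OMP: repeatedly pick the column most correlated with the residual,
# then re-fit the coefficients by least squares on the chosen support.
residual, chosen = y.copy(), []
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ residual)))
    chosen.append(j)
    coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
    residual = y - A[:, chosen] @ coef

x_hat = np.zeros(n)
x_hat[chosen] = coef
recovery_error = float(np.linalg.norm(x_hat - x_true))
```

Classical Nyquist reasoning would demand on the order of n samples; here m = 50 random measurements suffice because the signal is sparse, and with these sizes OMP recovers the signal essentially exactly (with high probability over the random sensing matrix).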
About the Speaker(s)
Research Professor, BU
Professor Chin directs the LISP (Learning, Intelligence + Signal Processing) group in the computer science department at Boston University, where he and his students are researching questions such as “Can Intelligence be learned?” at the intersection of signal processing, machine learning, game theory, extremal graph theory, and computational neuroscience.
He has also held the positions of Chief Scientist at Systems & Technology Research (STR), Chief Scientist – Decision Systems at Draper Laboratory, and Senior Technical Director at BBN in Cambridge, MA. Before moving back to New England in 2013, he was co-director of the DSP group in the Electrical and Computer Engineering (ECE) Department at Johns Hopkins University and Chief Scientist of the Cyber Technology Branch at the Johns Hopkins Applied Physics Laboratory. He was a visiting fellow of the London Institute of Mathematical Sciences and has held visiting positions at Tufts University (CS), Harvard University (Center of Mathematical Applications), and MIT (Dept. of Brain and Cognitive Science). He is currently an associate editor of IEEE Transactions on Computational Social Systems, and has served as conference co-chair of the annual SPIE/DSS Conference on Cyber Sensing and as symposium chair at the GlobalSIP conference.
Since completing his PhD, in which he developed differential-geometric methods for understanding Einstein's field equations, he has been passionate about developing geometric and topological methods to learn and understand information in general: signals (neural, RF, images, videos, hyperspectral, etc.), graphs (social networks, communication networks, etc.), and human interactions via game theory. His research has been, and continues to be, supported by NSF, NIH, AFOSR, DARPA, ODNI, ONR, OSD, and others, and has been published at conferences such as NeurIPS, ICASSP, and ISIT, and in journals such as Science Advances, IEEE Transactions, and the Journal of Machine Learning.
Chin is a Phi Beta Kappa graduate of Duke University, where he was a triple major in computer science, math, and electrical engineering. He received his PhD in mathematics from MIT.
For more information, contact Ashley Parker at firstname.lastname@example.org.