At the centre of most galaxies lies a massive black hole. These black holes are very heavy: their masses range from a million to over a billion times the mass of the Sun, so they are appropriately known as supermassive black holes. As galaxies move around in the Universe, they sometimes merge. When this happens, the supermassive black holes they host tend to migrate toward each other and form a binary system. As the two black holes orbit each other, they warp the fabric of space and time around them and produce gravitational waves, which ripple out into the Universe. These gravitational waves complete one full oscillation every year or so as they travel through space and are classified as low-frequency gravitational waves. The Universe is full of these supermassive black hole binary systems, and the gravitational waves they emit combine to form what is known as the stochastic gravitational wave background. Scientists are trying to find a gravitational wave signal from this background using a complex network of radio telescopes called a pulsar timing array, but it could be years before there is a confirmed detection. For this reason, cosmological simulations are often used to predict what this gravitational wave signal could look like. This type of simulation helps scientists understand the structure and history of the Universe by tracking the flow of matter and energy from a time soon after the Big Bang up until today. A team of researchers led by postgraduate researcher Bailey Sykes (from Monash University), alongside several OzGrav scientists, including OzGrav Associate Investigator Dr Hannah Middleton, has recently made a new prediction for the strength of this gravitational wave signal. The new estimate is based on data from the MassiveBlack-II simulation, which simulates a massive region of space similar to a chunk of our own Universe.
The team made two estimates: one in which the supermassive black holes merge almost instantly once their host galaxies collide, and another in which the two black holes take time to sink towards each other once they pair up in a binary system. This second estimate is important because the gravitational wave output of a binary can change during this time due to the interactions of nearby stars and gas with the supermassive binary. The gravitational wave signal simulated using MassiveBlack-II is similar to predictions from previous studies. It is weaker than the signals currently detectable by pulsar timing arrays; however, as the sensitivity of telescope technology increases over time, it’s possible a confirmed detection could be just around the corner. The results from the study add valuable insights to existing signal predictions and provide an important reference point for future pulsar timing arrays. Progressively more accurate estimates of the stochastic gravitational wave background can be used to further understand other astrophysical phenomena, including the interactions of stars and gas which impact merging supermassive black holes. Written by Bailey Sykes (Monash University)
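Predictions like this are usually summarised as a characteristic-strain amplitude. As a rough illustration only (the amplitude below is made up, not a number from the study), the background from many circular, gravitational-wave-driven supermassive binaries is commonly modelled as a power law in frequency with slope -2/3:

```python
# Toy sketch: the stochastic background from circular, GW-driven supermassive
# black hole binaries is commonly modelled as a power law in characteristic
# strain, h_c(f) = A * (f / f_yr)**(-2/3), where A is the amplitude at a
# reference frequency of one cycle per year. A = 1e-15 is illustrative only.

def characteristic_strain(f_hz, amplitude=1e-15, slope=-2.0/3.0):
    """Characteristic strain of a power-law GW background at frequency f_hz."""
    f_yr = 1.0 / (365.25 * 24 * 3600)  # one cycle per year, in Hz
    return amplitude * (f_hz / f_yr) ** slope

# A wave completing one oscillation per year sits right at the reference
# frequency; pulsar timing arrays are most sensitive near these frequencies.
f = 2.0 / (365.25 * 24 * 3600)
print(f"h_c at 2 cycles/yr: {characteristic_strain(f):.2e}")
```

The -2/3 slope is the standard result for binaries evolving purely by gravitational-wave emission; environmental interactions of the kind discussed above (stars and gas driving the binary together) flatten the spectrum at the lowest frequencies.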
Australian researchers shape the future of photonic sensing with spin-off company Vai Photonics
24/5/2022
In 2021, Australian researchers Lyle Roberts and James Spollard, from The Australian National University (ANU), co-founded Vai Photonics: a spin-off company developing patented photonic sensors for precision navigation. The ARC Centres of Excellence for Engineered Quantum Systems (EQUS) and Gravitational Wave Discovery (OzGrav) played key roles in kickstarting Vai Photonics by providing seed funding towards fundamental LiDAR research, which translated into real-world industry applications. Now, Advanced Navigation, one of the world’s most ambitious innovators in AI robotics and navigation technology, has announced the acquisition of Vai Photonics, with the aim of commercialising Roberts and Spollard’s research into exciting autonomous and robotic applications across land, air, sea and space. “The technology Vai Photonics is developing will be of huge importance to the emerging autonomy revolution. The synergies, shared vision and collaborative potential we see between Vai Photonics and Advanced Navigation will enable us to be at the absolute forefront of robotic and autonomy-driven technologies,” said Xavier Orr, CEO and co-founder of Advanced Navigation. Vai Photonics co-founder James Spollard explained: “Precision navigation when GPS is unavailable or unreliable is a major challenge in the development of autonomous systems. Our emerging photonic sensing technology will enable positioning and navigation that is orders of magnitude more stable and precise than existing solutions in these environments. “By combining laser interferometry and electro-optics with advanced signal processing algorithms and real-time software, we can measure how fast a vehicle is moving in three dimensions,” said Spollard.
“As a result, we can accurately measure how the vehicle is moving through the environment, and from this infer where the vehicle is located with great precision.” The technology, which has been in development for over 15 years at ANU, will solve complex autonomy challenges across aerospace, automotive, weather and space exploration, as well as railways and logistics. EQUS Director Professor Andrew White applauded the initiative and determination shown by Lyle and James. “Lyle and James are perfect examples of researchers achieving useful outcomes by utilising the funds, mentoring and guidance available through EQUS’s Translation Research Program to help pursue the real-world impacts that our research can deliver. These two are what Australia’s research future looks like,” said White. OzGrav Director Professor Matthew Bailes said he was thrilled to see such a positive outcome for early-career researchers who were supported by OzGrav's industry seeding scheme and workshops. "It reinforces the fact that pushing the limits of instrumentation for scientific purposes can often create opportunities for Australian innovators and industry," said Bailes. Professor Brian Schmidt, Vice-Chancellor of the Australian National University, said: “Vai Photonics is another great ANU example of how you take fundamental research – the type of thinking that pushes the boundaries of what we know – and turn it into products and technologies that power our lives. “The work that underpins Vai Photonics’ advanced autonomous navigation systems stems from the search for elusive gravitational waves – ripples in space and time caused by massive cosmic events like black holes colliding. “The team have built on a decade of research and development across advanced and ultra-precise laser measurements, digital signals and quantum optics to build their innovative navigation technology.
We are proud to have backed Vai Photonics through our Centre for Gravitational Astrophysics and business and commercialisation office. It’s really exciting to see the team take another major step in their incredible journey.” Co-founder Dr Lyle Roberts looks forward to an autonomous future: “This is a huge win for the Vai Photonics team – together with Advanced Navigation we are able to bring our product to market much faster than originally planned. We now have access to leading research and development facilities along with strong distribution channels. We couldn’t have asked for a better outcome and look forward to navigating the future with Advanced Navigation.” This acquisition fits into Advanced Navigation’s larger growth strategy to expand its product and solutions portfolio across deep technology fields that aim to solve the greatest challenges facing the autonomy revolution. The acquisition was finalised in April 2022, subject to typical closing conditions. The Vai Photonics team has been integrated into Advanced Navigation’s research and development team, based out of the new Canberra research facility. This article is an amended extract from the original article written by Laura Hayward, published on www.advancednavigation.com

Research highlight: Deep follow-up of GW151226 - an ordinary binary or a low-mass-ratio merger?
17/5/2022
Now that we've begun detecting gravitational waves (GWs), we'd like to better understand the systems that generate them. The GWs found so far have come from collisions of celestial bodies, like black holes and neutron stars. Once we have detected a GW, we use Bayesian inference to deduce the masses and spins of the objects that emitted it (to understand inference, check this video by 3blue1brown). Then we can use our mass and spin deductions to answer: where do these bodies exist in the Universe? Are these colliding bodies huddled together in galaxies or isolated in space?
But it gets tricky to answer such questions if our deductions of the masses and spins are incorrect! So, in my recent study, I have built a "deep follow-up" tool to determine which masses and spins better describe a given GW event. I have used this deep follow-up tool to study the "boxing-day" gravitational wave, GW151226. Initial work deduced that this GW was from the merger of two black holes (BHs), both with standard masses and spins (case A). However, recent work has deduced that the GW might have originated from a strange system: one BH could be much larger than the other, with a faster spin (case B)! A diagram representing these cases can be seen on the right side of Figure 1. Figure 1: GW151226's two personalities. (Left) The initial and new Bayesian inference results are plotted in orange and blue, respectively. We perform a deep follow-up on the pinned points, cases A and B. (Right) Illustrative versions of what cases A and B represent. Note: black hole cartoons inspired by NASA’s Field guide to black holes. The "deep follow-up" method involves drilling into these cases to determine which binary BH system better describes the GW. First, we pin down some deduced properties of the merging black hole system, such as the mass ratio q (the smaller BH mass divided by the bigger BH mass) and χeff (the effective spin of the binary along the orbital axis). The pinned values for the initial and new results are shown on the left side of Figure 1. We then run Bayesian inference at these pinned values. The output allows us to compare case A and case B. We find that both the standard (case A) and irregular (case B) black hole pairs can describe GW151226, giving the event something like a dual identity! This dual identity gives GW151226 much more character than initially considered. For example, we initially believed that GW151226 came from an isolated black hole pair.
However, a BH pair from case B is more likely to be found at the centre of an active galaxy! So, finally, I wonder: are there other GW events with split personalities? Hopefully, our deep follow-up method will be able to settle these questions. Written by OzGrav researcher Avi Vajpeyi, Monash University.

Research highlight: Do massive-star models from various simulations give the same predictions?
17/5/2022
Less than one percent of stars in a galaxy are formed with masses exceeding ten solar masses. Despite their rarity, massive stars are believed to play a crucial role in shaping their surroundings, ultimately determining the evolution of the star cluster or galaxy they are located in. Simulations of massive stars are used in many fields of astrophysics, from predicting gravitational-wave event rates to studying star formation and star cluster evolution. However, their rarity and short lives, along with their more extreme properties, mean that the evolution of massive stars is riddled with many uncertainties. These uncertainties are compounded by the fact that accurate modeling of stellar lives in three dimensions is prohibitively expensive in terms of computing resources. Therefore, stellar evolution is modeled using one-dimensional (1D) codes, with only radius or mass as the spatial coordinate. Three-dimensional (3D) processes such as rotation and mixing are approximated using 1D analogs, which generally give good results for most stars. However, in the envelopes of massive stars (and in low-mass stars at the late stages of evolution), the use of these 1D analogs can lead to numerical challenges for stellar evolution codes. The time steps of the computation become very small (of the order of days) and 1D codes struggle to compute the further evolution of the star.
While researchers are trying to find the solution using multidimensional models, 1D stellar evolution codes adopt different pragmatic methods to push the evolution of stars beyond these numerical challenges. These methods, along with other uncertain parameters in the evolution of massive stars, can significantly alter the predictions of massive stellar models. To get an idea of how different their predictions can be, we examined models of massive stars from five different datasets, each computed using a different 1D code. We found that certain aspects of these predictions were extremely sensitive to the modeling assumptions employed by different codes. For example, in Figure 2, the different sets of massive star models show a variation of about 20 solar masses in their predictions of the mass of the black hole formed. We also found huge differences in the radial evolution of these stellar models, and thus in the ionizing radiation they produce. These differences can directly affect binary evolution and the simulations of stellar environments, such as galaxies.

Lasers support certain structures of light called ‘eigenmodes’. An international collaboration of gravitational wave, metasurface and photonics experts has pioneered a new method to measure the amount of light in these eigenmodes with unprecedented sensitivity. In gravitational wave detectors, several pairs of mirrors are used to increase the amount of laser light stored along the massive arms of the detector. However, each of these pairs has small distortions that scatter light away from the perfect shape of the laser beam. This scattering can cause excess noise in the detector, limiting sensitivity and taking the detector offline.
From the recently submitted study, Prof Freise (from Vrije Universiteit Amsterdam) says: “Gravitational wave detectors like LIGO, Virgo and KAGRA store enormous amounts of optical power – in this work, we wanted to test an idea that would let us zoom in on the laser beam and look for the small wiggles in power that can limit the detectors’ sensitivity.” A similar problem is encountered in the telecoms industry, where scientists want to use multiple eigenmodes to transport more data down optical fibres. OzGrav researcher and lead author Dr Aaron Jones (The University of Western Australia) explains: “Telecoms scientists have developed a way to measure the eigenmodes using a simple apparatus, but it’s not sensitive enough for our purposes. We had the idea to use a metasurface and reached out to collaborators who could help us fabricate one.” In the study, the proof-of-concept setup the team developed was over 1000 times more sensitive than the original method developed by the telecoms scientists. The researchers will now look to translate this work into gravitational wave detectors, where the additional precision will be used to probe the interiors of neutron stars and test fundamental limits of general relativity. OzGrav Chief Investigator Prof Zhao (from The University of Western Australia) says: “Solving the mode sensing problem in future gravitational wave detectors is essential if we are to understand the insides of neutron stars.” Written by Dr Aaron Jones (The University of Western Australia).

Double neutron star (DNS) systems in tight orbits are fantastic laboratories to test Einstein's general theory of relativity.
The first such DNS system, commonly known as the Hulse-Taylor binary pulsar, provided the first indirect evidence of the existence of gravitational waves and the impetus to build LIGO. Since then, discovering such binary systems has been a major motivation for large-scale pulsar surveys. Although over 3000 pulsars have been discovered in our Galaxy, we have only found 20 DNS systems. Why are they so rare? DNS systems are the endpoints of complex and exotic binary stellar evolution. In the standard model, the two stars must survive multiple stages of mass transfer, including common envelope phases, and not one but two supernova explosions. The survival of the binary through the second supernova depends on the kick imparted by the explosion and the amount of matter ejected. It appears that it’s quite rare for binaries to survive all of these events. Those that do leave behind many insights into binary stellar evolution. Finding binary pulsars is more difficult than finding solitary ones. Acceleration makes their pure tones evolve in time due to the changing Doppler shifts, greatly increasing the complexity of the searches and the amount of computational time required. Fortunately, OzGrav scientists have access to the OzSTAR supercomputer at Swinburne University of Technology with its graphics processing units (GPUs). We use OzSTAR to search the High Time Resolution Universe South Low Latitude pulsar survey (HTRU-S LowLat) for accelerated pulsars. In our recently accepted paper, we present the discovery and results from 1.5 years of dedicated timing of a new DNS system, PSR J1325-6253, using the Parkes 64-metre radio telescope (now also known as Murriyang). By timing when the pulses arrived at Earth, we found that PSR J1325-6253 is in a tight orbit with a period of 1.81 days. Its orbit deviates from circularity with one of the lowest orbital eccentricities known for a DNS system (e=0.064).
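The Doppler drift that makes accelerated binary pulsars so much harder to find can be illustrated with a toy calculation: for a constant line-of-sight acceleration a, the apparent spin frequency shifts roughly as f(t) = f0 (1 + a t / c). The numbers below are illustrative, not values from the survey:

```python
# Toy sketch of why acceleration searches are needed: a binary pulsar's
# apparent spin frequency drifts as its line-of-sight velocity changes,
# smearing its "pure tone" across many Fourier bins. Values are made up.

C = 299_792_458.0  # speed of light, m/s

def apparent_frequency(f0_hz, accel_ms2, t_s):
    """Doppler-shifted spin frequency for a constant line-of-sight acceleration."""
    return f0_hz * (1.0 + accel_ms2 * t_s / C)

f0 = 100.0      # intrinsic spin frequency, Hz
a = 50.0        # line-of-sight acceleration, m/s^2 (plausible for a tight binary)
t_obs = 3600.0  # a one-hour observation

drift = apparent_frequency(f0, a, t_obs) - f0
print(f"frequency drift over 1 h: {drift:.3f} Hz")
# The Fourier bin width for a 1 h observation is 1/3600 s ~ 0.28 mHz, so this
# drift spreads the signal over hundreds of bins unless the search corrects
# for the acceleration - hence the large computational cost.
```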
The elliptical orbit advances its point of closest approach (periastron) to its companion star, as predicted by the theory of general relativity. The advance of periastron enabled us to determine the total mass of the system, which we found to be similar to that of other DNS systems. The low eccentricity of the orbit implies that there was almost no mass loss in the final supernova explosion beyond the energy carried off in neutrinos, and that it was a so-called ultra-stripped supernova. Such supernovae would be very sub-luminous, and usually invisible if too far from the Sun. This rare find provides new insight into how stars explode and the neutron stars they leave behind. Written by OzGrav PhD student Rahul Sengar, Swinburne University of Technology https://ui.adsabs.harvard.edu/abs/2022MNRAS.tmp..798S/abstract

One promising source of gravitational waves, not yet detected, is rapidly rotating neutron stars. Neutron stars are hyperdense leftovers from stellar evolution, formed from the cores of stars of a certain weight class (not too light, not too heavy). Instead of collapsing all the way to a black hole, they stop just short, ultimately packing the mass of the Sun into a ball about 10 kilometres across. Neutron stars are known to spin rapidly, up to hundreds of revolutions per second, and they are so fantastically dense that even a small (millimetres high!) mountain will emit continuous gravitational waves (CWs) that are potentially detectable by LIGO.
However, detecting these gravitational waves is no mean feat. Although they are continuously emitted (as opposed to gravitational waves from merging neutron stars and black holes, which last no longer than a few minutes), they are very quiet, and digging these signals out of the noise is very challenging. The task is complicated by the fact that we often have to search over a wide range of gravitational wave frequencies and sky locations, since we do not know where a gravitational wave-emitting neutron star might be in the sky, or how fast it might be spinning. All of these facts combine to create a formidable computational challenge – many searches for these continuous gravitational waves are limited by the available computing power. This motivates us to make these searches as computationally efficient as possible, and to take advantage of all available resources. One important resource which has so far been under-utilised in CW searches is graphics processing units (GPUs). Although initially designed, as their name suggests, for crunching numbers in service of producing 3D graphics, over the last twenty years GPUs have proven themselves equally useful in many scientific applications, often providing significant speedups over CPUs. Most supercomputing clusters are now equipped with a number of high-powered GPUs for exactly this reason. Our recent paper [1] presents an implementation of one very common method used in CW searches, the “F-statistic”, on GPUs. We show that, using our implementation, one GPU can do the work of 10–100 CPU cores, unlocking a significant new source of computational power for analyses using the F-statistic. We also show that achieving these speeds does not require sacrificing sensitivity, which is extremely important given the faintness of the signals we’re looking for.
Finally, as a demonstration of the utility of this new implementation in a real-world context, we run a small search for continuous gravitational waves from four recently discovered neutron stars spinning between 200 and 400 times per second. The search consumes 17 hours of GPU time, in contrast to the 1000 hours of CPU time which would have been required to run the equivalent search. This work will allow more CW searches to take advantage of the computing power offered by GPUs in the future and continue the push towards the first detection of continuous gravitational waves. [1] https://dx.doi.org/10.1088/1361-6382/ac4616 Written by OzGrav PhD student Liam Dunn, the University of Melbourne.

Gravitational waves are ripples in space-time created by distant astronomical objects and detected by large, complex detectors (like LIGO, Virgo, and KAGRA). Finding gravitational-wave signals in detector data is a complicated task requiring advanced signal processing techniques and supercomputing resources. Due to this complexity, explaining gravitational-wave searches in the undergraduate laboratory is difficult, especially because live demonstration using a gravitational-wave detector or supercomputer is not possible. Through simplification and analogy, table-top demonstrations are effective in explaining these searches and techniques.
A team of OzGrav scientists, across multiple institutions and disciplines, has designed a table-top demonstration with data analysis examples to explain gravitational-wave searches and signal processing techniques. The demonstration can be used as a teaching tool in both physics and engineering undergraduate laboratories and is to be published in the American Journal of Physics. Lead author of the project James Gardner (who was an OzGrav undergraduate student at the University of Melbourne during the project and is now a postgraduate researcher at the Australian National University) explains: “This demonstration offers some charming insights into a live field of research that students like me should appreciate for its recency compared to the age of most ideas they encounter”.

Table-top gravitational-wave demonstrations
Gravitational wave detectors are very complicated and huge — laser light is sent down tubes kilometres long! But the workings of a gravitational-wave detector can be demonstrated using table-top equipment. Researchers at the University of Adelaide have developed AMIGO to do just that! Deeksha Beniwal, co-author of this study and an OzGrav PhD student at the University of Adelaide, explains: “With AMIGO, the portable interferometer, we can easily share how LIGO uses the fundamental properties of light to detect ripples from the most distant reaches of the universe.” This work expands on the portable interferometer demonstration with a selection of examples for students in both physics and electrical engineering. Changrong Liu, co-author of this study and an OzGrav PhD student in electrical engineering at the University of Melbourne, explains: “This project offers a great opportunity for electrical engineering students like me to put some of their knowledge into the real and exciting physical world”.
Explaining the hunt for continuous gravitational waves
To demonstrate searching for signals with the table-top set-up, the team first needed to make some fake signals to find! This is where the analogy of sound comes in: audio signals are used to mimic gravitational waves interacting with the detector. The team focused on demonstrating the hunt for continuous gravitational waves, a type of gravitational wave that hasn’t been detected yet. Hannah Middleton, co-author of the study and an OzGrav Associate Investigator (at the University of Birmingham), explains: “Continuous waves are long-lasting signals from spinning neutron stars. These signals should be present in the detector data all the time, but the challenge is to find them. This demonstration is directly inspired by the techniques developed by OzGrav physicists and electrical engineers in the hunt for continuous gravitational waves!” A continuous wave signal can slowly change in frequency, so the audio signals used in this demonstration also change in frequency. “We show, through using sound as an analogue to gravitational waves, what it takes to detect a wandering tone: a long signal that slowly changes pitch like whalesong,” explains Gardner. Prof. Andrew Melatos, co-author of this study and leader of the OzGrav-Melbourne node, explains: “We hope that undergraduate educators will emphasize the cross-disciplinary spirit of the project and use it as an opportunity to speak more broadly to students about careers at the intersection of physics and engineering. The future is very bright career-wise for students with experience in cross-disciplinary collaboration.” Written by OzGrav Assoc. Investigator Hannah Middleton (University of Birmingham) and OzGrav postgrad researcher James Gardner (ANU).
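The "wandering tone" idea above can be mimicked in a few lines: generate a noisy, audio-like signal whose frequency drifts slowly, split it into short chunks, and track the loudest frequency in each chunk. This is only a simplified stand-in for the real analysis (which uses far more sophisticated tracking, such as hidden-Markov-model methods), with made-up numbers:

```python
import numpy as np

# Build a noisy "wandering tone": a sinusoid whose frequency drifts slowly,
# standing in for a continuous gravitational wave buried in detector noise.
rng = np.random.default_rng(0)
fs = 4096.0                       # sample rate, Hz
t = np.arange(0, 64.0, 1.0 / fs)  # 64 seconds of data
f_true = 200.0 + 0.05 * t         # tone wanders upward from 200 Hz
phase = 2 * np.pi * np.cumsum(f_true) / fs
signal = 0.3 * np.sin(phase) + rng.normal(0.0, 1.0, t.size)

# Track the tone: split the data into 1-second chunks and pick the peak
# frequency in each chunk's power spectrum.
chunk = int(fs)
recovered = []
for i in range(0, t.size, chunk):
    seg = signal[i:i + chunk]
    power = np.abs(np.fft.rfft(seg)) ** 2
    freqs = np.fft.rfftfreq(seg.size, 1.0 / fs)
    recovered.append(freqs[np.argmax(power[1:]) + 1])  # skip the DC bin

print(recovered[:3])  # frequencies near 200 Hz, rising slowly over the chunks
```

Even this crude tracker recovers the drifting tone because the signal, though weak in any single sample, accumulates power in one spectral bin per chunk; the same principle, pushed much further, underlies real continuous-wave searches.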
Scientists from the ARC Centre of Excellence for Gravitational Wave Discovery and the University of Cologne (Germany) have developed new simulations of stars’ complicated lives, boosting research on how new stars are born and how old stars die. These stellar evolution simulations, called the BoOST project, can be used to predict how often gravitational waves should be detected—gravitational waves (ripples in space-time) are expected when the compact remnants of two dying stars merge. The project can also help to study the birth of new stars out of dense clouds in space. Not all stars are the same. Sure, they all look like tiny, shining points in the sky, but that's only because they are all so far away from us. We only see stars that are close and bright enough; the rest we may see with telescopes. If you use a telescope to measure the colour of a star, it turns out that some stars are rather red, some are blue, and some are in between. And if you measure their brightness, it turns out that some are brighter than others. This is because a star’s colour and brightness depend on its mass and age, among other things. It's a complex theory that has been developing since the age of the first computer simulations in the 1950s. Today, we have computer simulations that can predict how a star lives its complicated life, from birth until death. This is called 'stellar evolution' and applies to the stars that are close enough for us to observe with telescopes. But there are stars so far away that even the largest telescopes can’t view them clearly; there are stars hiding inside thick clouds (yes, such clouds exist in space); and there are dead and dying stars that used to exist once upon a time. Is there a way to study these unreachable stars, to observe similarities and differences from those that we can actually see? Stellar evolution simulations can help here, because we can simulate any star—even the stars we can’t see.
For example, stars that were born soon after the Big Bang had a different chemical composition than the stars we see today. From computer simulations, we can figure out what these early stars looked like: their colour, brightness, etc. What's more, we can even predict what happens to them after they die. Some of them become black holes, for example, and we can tell the mass of such a black hole based on how heavy the star had been before it exploded. And this presents more opportunity for discovery! For example, it’s possible to predict how often two black holes merge. This gives us statistics about how many times we can expect to detect gravitational waves from various cosmic epochs. Or, when trying to understand how stars are born out of dense clouds, we can count the number of hot bright stars and the number of exploding stars around these cloudy regions. Both hot bright stars and explosions change the clouds' structure and influence the birth of new stars in delicate ways. The BoOST project predicts how stars live their lives. These diagnostic diagrams show stellar evolution simulations of massive and very massive stars (colourful labels in solar mass units). These are stellar lives in the Milky Way (left), in the Small Magellanic Cloud (middle) and in a metal-poor dwarf galaxy (right). One line on these diagrams belongs to one star’s whole life from birth to death. Their brightness is shown changing on the vertical axis, and their apparent ‘colour’ (surface temperature, with lower values meaning red and higher values blue) on the horizontal axis. These simulations can give a boost to research on how new stars are born and how old stars die.
Lead scientist on the study Dorottya Szécsi from the University of Cologne says: ‘Much like the theory of stellar life got a boost in the 1950s from computerization, we hope our BoOST project will contribute to other research fields, because both the birth of new stars and the ultimate fate of old stars depend on how stars live their complicated and very interesting lives’. “Given the importance of massive stars in astrophysics, from determining star formation rates to the production of compact remnants, it is essential that our theoretical models of stars keep pace with advancements in observations,” says OzGrav postdoctoral researcher and study co-author Poojan Agrawal. Link to paper: https://www.aanda.org/articles/aa/full_html/2022/02/aa41536-21/aa41536-21.html

Pulsars, a class of neutron stars, are extremely predictable stars. They are formed from the hearts of massive stars that have collapsed in on themselves, no longer able to burn enough fuel to fend off their own crushing gravity. If the conditions are right, the star will continue to collapse in on itself until all that’s left is a remnant of the original star, usually only about the size of the Melbourne CBD but 1-2 times as heavy as our Sun, making these some of the densest objects in the Universe.
These stars don’t produce much visible light, but from their magnetic poles they emit surprisingly bright beams of radio waves. If we’re lucky, as the star rotates, those beams wash over the Earth and we observe ‘pulses’. While most pulsars spin around in about a second, there is a subclass of these stars that spin around in just a few thousandths of a second—they’re called ‘millisecond’ pulsars. Observing the pulses from these millisecond pulsars gives physicists clues to many questions, including testing general relativity and understanding the densest states of matter. But one of the main goals of observing these incredibly fast, dense stars is to detect ultra-long-wavelength gravitational waves. And by long, we mean many light-years long. These gravitational waves distort space-time between us and the pulsars, causing the pulses to arrive earlier or later than expected. It’s likely that these gravitational waves come from a background produced by all the binary supermassive black holes in the Universe, which form when galaxies crash into one another. As part of OzGrav, we try to detect this gravitational wave background by looking at collections of the most predictable stars (called pulsar timing arrays) and measuring how they change over time. We do this using the world’s most sensitive radio telescopes, including the Australian Murriyang telescope (also known as the Parkes telescope) and the ultra-sensitive MeerKAT array telescope in South Africa. But it’s not quite that simple. From our observations with MeerKAT, we found that the most precisely timed (read: predictable) pulsar, J1909-3744, was misbehaving. We found that the pulses were changing shape, with bright pulses arriving earlier and narrower than faint ones. This led to greater uncertainty in its predicted emission. Fortunately, we were able to establish a method to account for this change and time-tag the pulsar more precisely than ever before.
This method could be of use for other pulsars and will be important when more advanced telescopes are available in the future. Written by OzGrav PhD student Matthew Miles, Swinburne University
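Pulsar timing, as described in the article above, ultimately boils down to comparing measured pulse arrival times against a model's predictions; the leftover differences (the timing residuals) carry any unmodelled physics, including passing gravitational waves. A minimal, illustrative sketch with invented numbers:

```python
import numpy as np

# Toy pulsar timing: predict pulse arrival times from a constant spin period,
# then measure residuals (observed minus predicted). Real timing models also
# include spin-down, orbital motion, and interstellar delays; every number
# here is made up for illustration.
rng = np.random.default_rng(1)

period = 0.0029                              # spin period, s (a millisecond pulsar)
n_pulses = np.arange(0, 1_000_000, 50_000)   # pulse numbers of the observed pulses
predicted_toas = n_pulses * period           # model: arrival time = pulse number * period

# Pretend something (noise, pulse-shape changes, or a gravitational wave
# background) perturbs the arrival times at the ~100 nanosecond level.
observed_toas = predicted_toas + rng.normal(0.0, 100e-9, predicted_toas.size)

residuals = observed_toas - predicted_toas
print(f"rms timing residual: {np.std(residuals) * 1e9:.0f} ns")
```

A pulsar timing array looks for a correlated pattern in residuals like these across many pulsars spread over the sky, which is the signature that distinguishes a gravitational wave background from per-pulsar noise such as the pulse-shape changes described above.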