
Thursday 25 June 2020

Mathematical quests for cosmology and astrophysics


Since in science we know what we think we know (see the relevant previous publication), whenever we discuss scientific issues we should bear in mind that every theory constructed to explain hitherto unexplained phenomena rests on a model or scenario through which answers to open questions are sought.

In sciences such as physics/astrophysics and cosmology in particular, most experts take as their starting point a fundamental idea of how answers to the emerging problems under investigation might be reached.

One could trace such suggestions and ideas back to the time when man, taking his first steps as a rational being, tried to understand the phenomena around him and to communicate his explanations in one form or another.

In modern physics, discoveries have always been supported by observation and experiment. And what is an experiment? Nothing but a repeated observation. Take, as an example, our knowledge of the structure of the material world. Starting, in the sense explained above (observation + experiment), from Mendeleev's well-known Periodic Table of the Elements, scientists went on to interpret the idea of the existence of atoms. Thomson's plum-pudding model proved incorrect, and we moved to Rutherford's atom, with the nucleus at the centre and electrons orbiting around it. This, however, led to a contradiction: the electrons, losing energy as charges in accelerated motion, would very quickly spiral into the nucleus, so no atom could survive for long. Then came Bohr, whose well-known quantum theory offered a partial solution to this problem. There followed the rapid development of Quantum Mechanics, which provides answers and a better understanding of how the atom works.

Then came another idea: the unification of the four fundamental force fields of nature, which became the central problem of physics worldwide. The theories developed in response rest on increasingly advanced and complex mathematics: the so-called "Standard Model", versions of a "Grand Unified Theory (GUT)" and a "Theory of Everything (TOE)". Gravity, however, could not be unified with the other three fundamental forces/interactions (strong nuclear, weak nuclear and electromagnetic), and so the scientific community proposed new models such as "Supersymmetry" and "Supergravity". It should be noted that even these theories have not achieved the unification of Quantum Mechanics with the General Theory of Relativity, so the ultimately sought explanation has still not been obtained.

Not to appear completely pessimistic, let us note that the so-called "Superstrings" are a newer model of which some version, it is thought, could give the final answer. In essence, scientists could not calculate, in theory, what happens when two gravitons (the carrier particles of the gravitational interaction) collide, since the calculations yielded energy tending to infinity in a tiny region of space, a sign of the weakness of the mathematics known so far; they therefore turned to the invention of strings. In models where strings are assumed, only strings can collide and be reflected in a clean, mathematically tractable way, without producing infinities that are meaningless from the physics point of view. So far, however, even though strong mathematical minds worldwide are working on the subject, the results have not been successful. Some argue that this theory must await the discovery of new, more advanced mathematics to solve its equations.

That being the case in the context of the mathematical quests that support the models of cosmology and physics/astrophysics, and stating from the outset our humility towards the strong minds of the planet, we set out below some thoughts aimed at enriching the approach to unresolved problems of physics. These thoughts follow in the wake of the above review of the progress made to date and are supported by mathematical argument.




Question 1, a "subversive" one for this approach: Is it possible that the cosmological models put forward to date are not sufficient? Does mathematical research in support of such theoretical models have no future?

Where did the above review leave us? At the string-based models. Well, are these invented entities, the strings, whose physical properties resemble those of the ordinary strings of our everyday world except that they extend in more than four dimensions, really the right choice? Given the mathematical difficulty of solving the string equations, might some other entity offer a more solid, and mathematically more accessible, basis for supporting a cosmological model?

Our personal thought turns to the idea of White Holes; and, both for the mathematical formulation of the model and to gain further degrees of freedom in developing an integrated cosmological model, let us suppose that these are not holes of large dimensions. On the contrary, let us imagine White (not Black) Holes in the sub-Planckian regime, i.e. at the scales set by Planck's constant h ≈ 6.6 × 10⁻³⁴ m²kg/s: lengths of 1.6 × 10⁻³⁵ m, times of 5.4 × 10⁻⁴⁴ s, masses of 2.2 × 10⁻⁸ kg, temperatures of 1.4 × 10³² K, or electric charge of 1.9 × 10⁻¹⁸ C. These holes can be called Mini White Holes (MWH).
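For reference, the Planck-scale values quoted above follow directly from the fundamental constants; note that the standard definitions use the reduced constant ħ = h/2π rather than h itself. A quick check, sketched in Python:

```python
import math

# CODATA values of the fundamental constants
hbar = 1.054571817e-34  # reduced Planck constant h/(2*pi), in J*s
G = 6.67430e-11         # Newton's gravitational constant, in m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, in m/s

# Standard definitions of the Planck units
planck_length = math.sqrt(hbar * G / c**3)  # ~ 1.6e-35 m
planck_time = math.sqrt(hbar * G / c**5)    # ~ 5.4e-44 s
planck_mass = math.sqrt(hbar * c / G)       # ~ 2.2e-8 kg
```

These reproduce the lengths, times and masses quoted in the paragraph above.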




Perhaps a clarification is also needed on the distinction between White and Black Holes. In the sense of the Theory of Relativity, a Black Hole is the region of space-time which every entity (including light) can enter but cannot escape. Correspondingly, a White Hole is the region of space-time which no entity (including light) can enter but from which escape is possible.

The advantage of the White Hole approach over strings may already be apparent: White Holes, and in particular those of the sub-Planckian regime, the Mini White Holes, could be the most plausible probabilistic generators of the universe(s).

There is one further key advantage to the approach involving these entities (MWH): the mathematical investigation of the behaviour of matter within them is perfectly tractable, by solving differential equations with the numerical values of the sub-Planckian regime substituted in. The aim is to determine the probability of an event occurring within such a hole, i.e. within the sub-Planckian regime.

For such an event to serve as evidence of the generator status we have assumed for the MWH, it suffices to determine mathematically the probability of a quantity of mass emerging through such a hole and then passing to the quantum level of the normal (i.e. post-Big Bang) universal space that we experience.

Well, the first mathematical calculations (solving the one-dimensional relativistic Schrödinger equation with the initial assumption P = a²|Ψ|²) indicate that the probability (let it be P) of an event such as the above (emergence of a quantity of mass from an MWH and transition to normal universal space) follows over time a power-law form, with goodness of fit R² = 0.999, i.e.

P = f(t) = a·t^b

where

a ≈ 3.66

b ≈ 1.96

Is it a coincidence, or does some as yet unknown law govern the development of probability in the sub-Planckian regime and oblige it to follow a power-law regression? This remains to be answered by further study and, perhaps at some point in the future, by experimental data.




This approach, though, does not have only positive features; it also leaves questions unanswered. For example, would it not be reasonable to ask whether, if the above holds and various small (partial) universes emerge from the MWH, this insertion of mass poses a problem for the Schwarzschild radius of the Great (known) Universe? What it implies, after all, is a continuous expansion (not inflation).

But so be it: this is a first approach, duly subject to future amendments and revisions in combination with any experimental data, if and when the science of the future makes that possible.


Friday 20 March 2020

CROSSWORD PUZZLE 20.03.2020






[7 × 7 crossword grid: columns A–G, rows 1–7]

ACROSS
1.        In a wall or in the economy means quite the same.
2.        It may define a project for which the owner and the operator are a single body (initialism) – The American Orthodox Church of Russian origin (initialism)
3.        You do this to the bottle if you drink heavily.
4.        The spirits in Greek.
5.        Modifying a text or a computer file.
6.        Best place to buy royalty-free photos on the web – A US state in the Pacific Region
7.        A free software environment for statistical computing and graphics – The ancient Egyptian god of the earth

DOWN
A.      The use of it marked a 2000-year period of human prehistory.
B.       Alias the drunkard.
C.       When added to the beginning of a word inverts the meaning – A serf, mainly in feudal servitude
D.      A computer programming language for structured programming – The research and training body of the European trade union organisation (spelled backwards)
E.       Worlds in Greek – A well-known forum of twenty.
F.        If you do so, you risk.
G.      The same with the 1st clue in “C” – There are such of Oxford style (spelled backwards)

Wednesday 19 February 2020

The "prisoner's dilemma": A proposal to amend the model to be implemented on international relations strategy







Prompted by a video (among other similar ones) circulating on the internet about the game-theory problem known as "the prisoner's dilemma", an explanation of the problem is summarised below; beyond that, a proposal is made to extend it so that its predictions can be applied to international relations strategies.

The dilemma (a choice between two options) described in the video is in fact a paradox of analytical calculus as regards the concluding part of the analysis, i.e. the decision-making. So what have we got here? Two individuals, each with personal interests, ultimately (as we usually say, at the end of the day) fail to achieve what is best for them. They fail because each chooses his own salvation at the expense of the other. The result is that both end up in a worse situation than they would have reached had they cooperated in making their decisions! This is game theory (the optimal decisions of independent players acting against a backdrop of various strategic options) in an amended version.
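For concreteness, the conventional dilemma can be sketched with the textbook payoff matrix (sentence lengths in years, so lower is better; the numbers are the standard illustration, not taken from the video discussed above):

```python
# Classic prisoner's dilemma payoffs: (years for player 1, years for player 2).
# Textbook values, used here only to illustrate the paradox.
YEARS = {
    ("silent", "silent"): (1, 1),
    ("silent", "betray"): (3, 0),
    ("betray", "silent"): (0, 3),
    ("betray", "betray"): (2, 2),
}

def best_response(opponent_move):
    """Each player's selfish best reply, whatever the other does."""
    return min(("silent", "betray"),
               key=lambda m: YEARS[(m, opponent_move)][0])

# Betraying is dominant: it is the best reply to either opponent move...
assert best_response("silent") == "betray"
assert best_response("betray") == "betray"
# ...yet mutual betrayal (2, 2) is worse for both than mutual silence (1, 1).
```

Betrayal is each player's dominant strategy, yet mutual betrayal leaves both worse off than mutual silence; this gap between individual rationality and the collective outcome is exactly the paradox described above.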

Well, the theoretical approach is good, an approach of numerical analysis – in a broad sense – as we put it above, but it might be appropriate to be more practical.

In this light, taking the more practical view, we could extend the (numerical) analysis of an amended model of the problem in question, the "prisoner's dilemma", and begin to discuss a realistic approach to a similar problem at another level: the "problem of international cooperation".

States are by definition the players on the international political scene. Consequently, each state, in such a role, aims to profit over the others regardless of its willingness/preference to cooperate (or not). The conventional "prisoner's dilemma" analysis does not take into account the players' expected gains according to their preferences, or the players' potential/capacity to impede (veto) international cooperation. This is precisely where the conventional "prisoner's dilemma" needs the amendment mentioned above, so that its analysis also accounts for the preferences of the player-states and the payoffs that correspond to them in managing whether or not to impede so-called international cooperation.

A typical example of this latest kind of play (the management of international cooperation) can be seen today in the moves of certain leaders. US President D. Trump, for instance, refuses to participate in international cooperation on various issues (climate change, international trade and business deals, etc.), while French President E. Macron moves in the opposite direction to the "player" Trump, contributing to international cooperation on those same issues and, beyond that, identifying and bringing to the table of the international game new issues for cooperation. Other leaders, representing player-states of smaller scope, make similar moves; Turkey's R. T. Erdogan, well known to us and no exception, "played" early in his career by promoting international cooperation, while later, and still at the current juncture, he "plays" the opposite way, with the so-called "adverse" options, blocking international cooperation even within organisations that are by definition cooperative and international (NATO, the OSCE, etc.).




On the basis of the above analysis of both the conventional "prisoner's dilemma" and its proposed amendment/extension to international relations between player-states, it is obvious that the application of this extension (if not its theoretical numerical analysis too), which we here attempt to explain, was long ago discovered and analysed by the advisors and strategic-analysis think tanks that support the aforementioned leaders, and not only them. And the leaders, of course, have not missed the opportunity to proceed with its implementation!


Friday 24 January 2020

Viability of reinforced concrete





Reinforced concrete has always been regarded as a material that "forgets" to age, that is, when reference is made to its effective functional lifetime. Although this view is largely true, the external environment can undoubtedly shorten the life expectancy of reinforced concrete: prolonging its acceptable functionality then requires costly repairs and causes some disruption to everyday life (for the users and for the material).




When most people think of concrete as an extremely durable material, the Pantheon in Rome immediately comes to mind: a structure that has been in use for nearly 2,000 years. If Roman concrete can last for centuries, surely today's construction materials should support a design lifetime (in the technical/engineering sense) of 100 years. In a way, the assumptions made about the viability of the material may have influenced earlier specifications, which nevertheless fail to describe its performance over time adequately.




As our understanding of the sustainability of materials increases, it is natural to expect increased performance. For example, in the case of major structures (landmark, monumental, etc.), the desire is for them to be preserved indefinitely. The ultimate goal, then, must be an approach to producing construction materials that allows the construction of major projects with an extended lifespan, like the Pantheon mentioned above.

A huge amount of research on the resilience of concrete was conducted from the 1970s to the 1990s and produced a great deal of technical knowledge. It is now possible to exploit this information so as to provide a level of confidence to the owners of reinforced concrete structures.

The truth is that, as early as the 1970s, there was a great deal of research into understanding the viability of reinforced concrete. Each relevant research approach is based on modelling the structure's capacity over time. The most common probabilistic models of this kind thus provide for a nearly linear increase of the probability of failure over time up to approximately 60%, with the turning point corresponding to approximately 43 years, followed by a rise towards a probability of approximately 95% at 100 years.
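As a sketch, that curve can be written down as a piecewise-linear function of age. This is a simplified reading of the figures quoted above; the straight-line shape between the quoted points is an assumption for illustration.

```python
def failure_probability(t_years):
    """Piecewise-linear sketch of the failure-probability curve described
    in the text: roughly linear growth to ~60 % at ~43 years, then a
    slower rise to ~95 % at 100 years. Illustrative only."""
    if t_years <= 0:
        return 0.0
    if t_years <= 43:
        return 0.60 * t_years / 43            # first, steeper branch
    if t_years <= 100:
        return 0.60 + 0.35 * (t_years - 43) / 57  # branch after the turning point
    return 0.95                                # plateau beyond the 100-year mark
```

The turning point at about 43 years marks where the model's growth rate slows, which is why the two branches have different slopes.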

For the forecasts of these probabilistic models there is, of course, an important role for external factors related to the macro- and micro-climate, as well as to the quality of the materials used, since this generally varies. It is therefore understood that both the capacity of the materials and their functional lifetime should be the subject of a rather stochastic evaluation.

Based on the above, it would be appropriate to clarify a few concepts concerning the technical approach of the viability of the structures:

·         Functional Capacity and Deterioration

The functional capacity of a structure usually means its assessment on the basis of its functionality, as related to the use of the structure. By extension, functional capacity refers to basic operating parameters of the structure, such as design durability, stability, safety, morphology, etc. Functional capacity is usually studied as a quantifiable property of structures in relation to time. A functional structure is therefore one that satisfies the purpose for which it was designed and constructed.

Accordingly, deterioration means impairment of functional capacity in relation to time and may reasonably be regarded as the inverse of the functional capacity. Therefore, measurement of deterioration allows for assessing any functional capacity problems. This finding, in turn, implies that the functional capacity threshold arises on the basis of the determination of an acceptable deterioration ceiling. These are the so-called sustainability limits. Of course, as in mathematics, when limit conditions are established, the above mentioned limits may be determined in a way that is related to either an absolute level or a level of functionality corresponding to an acceptable level of maintenance. Thus, the maintenance time of the structure is determined and consequently the functional requirements for it.

It is obvious that the viability of reinforced concrete depends on the viability of its two main components, namely the concrete and the reinforcement (steel). The interoperability of these two basic materials is the main prerequisite for the viability of reinforced concrete structures. If, in other words, there are defects in the initial fabric of the reinforced concrete, if the materials selected have quality problems, or if there is an unfavourable (non-design) loading that favours collapse, deterioration is guaranteed. The environment within which these materials are installed and operate also decisively influences the functional capacity, and correspondingly the deterioration, of the structure.



·          Functional Lifetime

The estimation of the functional lifetime of the materials may be done either through their anticipated lifetime or through the acceptable maintenance period. As functional lifetime may be expressed in three ways, i.e. technical, functional or economic, relevant criteria for evaluating use are obviously required. For example, the estimation of the functional lifetime of a structure from an investment point of view is done through techno-economic analyses concerning the maintainability and reliability of the structure's operation.

Functional lifetime and maintenance are completely correlated concepts since, in any case, some maintenance procedures are performed during the operating time of a structure. For this reason, maintenance work that affects the functional lifetime deserves due attention. This finding effectively changes the definition of functional lifetime, to which a maintenance condition should be added, i.e. a phrase of the type: "...if and as long as the structure is maintained systematically".

It is, of course, up to the so-called Master of the Project (MoP), or in any case the owner of the structure (in the broadest sense of the term chosen for this text), to define the operational and sustainability requirements, which is what ultimately defines the functional lifetime.




·         Probability of Failure

When a functional lifetime has been defined, stochastic viability planning should include determining the maximum probability corresponding to the avoidance of a limit state. Such limit states can be either the ultimate limit state or the limit state of acceptable serviceability.

There are two types of failure: viability failure and mechanical failure (e.g. bending, buckling, hammering, creep, loosening, thermal shock, fatigue, corrosion, cracking). For a material, however, it is essentially the viability failure that is responsible for failure from a mechanical cause.

In ordinary mathematical models, the assessment of the failure hazard is obtained by multiplying the probability of failure by the quantified measured deterioration.
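That multiplicative rule amounts to a one-liner; the function name and the 0-to-1 deterioration scale used here are illustrative assumptions, not part of any standard:

```python
def failure_hazard(p_failure, deterioration):
    """Failure hazard as described in the text: the probability of failure
    multiplied by the quantified measured deterioration. The 0-1 scales
    used here are an illustrative assumption."""
    if not (0.0 <= p_failure <= 1.0):
        raise ValueError("probability must lie in [0, 1]")
    return p_failure * deterioration

# e.g. a 40 % failure probability with deterioration quantified at 0.5
hazard = failure_hazard(0.4, 0.5)
```

The product form means the hazard is driven up only when both factors are significant: a likely failure of a barely deteriorated element, or an unlikely failure of a badly deteriorated one, both score low.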

The determination of the probability of failure is based on social, economic and environmental criteria. For the social criteria, the essential is the importance of the structure and the consequences of failure as they endanger human lives. For the economic criteria, the additional – compared to the construction cost – economic consequences of the situation created because of the failure are examined.

For environmental/ecological criteria, the assessment is based on environmental problems caused or the circumvention of ecological principles.

The estimation of the probability of failure is applied both at the stage of the study of new structures and in existing structures. In the second case, of course, the safety tolerances are smaller compared to the first case.




·         Viability Design

Conceptually, viability design is based on safety, as the structure must effectively address the various hazards to which it is exposed. Safety is certainly examined systematically by applying the laws of Mechanics. During the design stage, however, the examination of the behaviour of construction materials takes a broader view. Why? Precisely because what matter are the concepts mentioned above: the viability and the operational lifetime of the structure.

By introducing the time factor into the design, it becomes possible to study the deterioration of materials, which is, of course, part of the whole problem of the viability of structures. On the basis of this time-dependent functional approach, the desired behaviour requirements of the structure are set out, and they must be met in the long term with a view to safety.

In this context, for reinforced concrete structures, the main categories examined (each further divided into finer subcategories) are:
*      Freezing/thawing
*      Influence of sulfates, mainly the anion SO₄²⁻
*      Contact with water
*      Protection of the reinforcement against corrosion

·         The Environment

What is required for optimal design is a thorough study of the characteristics of the environment within which a structure will be set in operation, i.e. to which the construction materials will be exposed.

To this end, in the generally applicable concrete specifications, as regards the risk of failure due to environmental conditions, various (classified) cases regarding exposure to environmental factors are included and they are assigned to classified categories of failure risk, as follows:

            *      Zero failure risk
            *      Failure caused due to carbonation
            *      Failure caused by chlorides not associated with seawater
            *      Failure caused by seawater chlorides
            *      Freezing/Thawing with or without external de-icing agents

Based on the above classifications, it is obvious that cases concerning environmental or corrosive factors, which may coexist, are examined individually. Therefore, in the study of structures, design durability includes taking into account the combined effect of such risk factors on the concrete. The criterion of risk acceptance (the so-called design tolerances), which will ultimately determine the viability of the concrete, is nothing other than cost.

In any case, experience has shown that the most serious risk to the viability of concrete is related to the failure of the embedded reinforcement, which may well damage the concrete surrounding it. The repair of such faults is always very costly and gives rise to what are called, from a techno-economic point of view, "indirect" costs.




Conclusion

The overall approach that can guarantee the viability of reinforced concrete structures must include a thorough study of the (constantly changing) environment to which the structure will be "exposed" and the proper production of the reinforced concrete, based on the study of the materials (concrete, reinforcement), curing, workability and quality controls. Then, during operation, the monitoring of cracking and proper maintenance naturally play an important role. Given the application of the above approach, the viability of the concrete, and of the structure made of it, is guaranteed.