What is the difference between a zero operator, zero function, zero scalar, and zero vector?
I'm pretty sure that a zero vector is just a vector of length zero with direction, that a zero scalar is just the number zero, and that a zero function is any function that maps to zero. I'm not entirely sure what exactly a zero operator is, however.
linear-algebra soft-question terminology
asked yesterday by Arlene (new contributor); edited yesterday by J. W. Tanner
So two vectors of length zero and different directions are... equal? Thinking of vectors in terms of length and direction is a very misguided idea which doesn't work in mathematics.
– Asaf Karagila♦
18 hours ago
@Asaf it basically works in an inner product space as long as you make sure that you only have one vector of length zero (whether it has all directions or no direction probably doesn't matter).
– Mark S.
15 hours ago
@Mark: I know that. My point is that "vector is direction and length" is a bad intuition about vectors.
– Asaf Karagila♦
14 hours ago
@AsafKaragila How should one think about vectors then?
– Arlene
11 hours ago
@Arlene vectors are just a convenient way to store numbers. In 2d/3d geometry it is useful to think of them as direction and magnitude, but a zero-length vector in such a geometry has no direction, because for the direction to be meaningful the vector must have a nonzero length.
– UKMonkey
9 hours ago
3 Answers
The zero vector is a vector, i.e. a member of whatever vector space is under consideration. It has the property that adding it to any vector $\bf v$ in the vector space leaves $\bf v$ unchanged.
The zero scalar is a scalar, i.e. a member of the field that is part of the definition of the vector space (usually the real or complex numbers in an elementary linear algebra course). It has the property that multiplying any vector $\bf v$ by it gives the zero vector of that vector space.
The zero operator is a linear operator, i.e. a linear map from a vector space to a vector space (possibly the same one). It has the property that it maps any member of the first vector space to the zero vector in the second vector space.
The zero functional is a linear functional, i.e. a linear map from a vector space to the scalars. It has the property that it maps any member of the vector space to the zero scalar.
– Robert Israel
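As a concrete illustration of the four objects described above, here is a minimal NumPy sketch (not part of the answer itself; the dimension $3$ and all variable names are chosen arbitrarily), with the zero operator represented as a $3\times 3$ zero matrix and the zero functional as a zero row vector:

```python
import numpy as np

V_DIM = 3                                  # work in R^3, purely for illustration
v = np.array([1.0, -2.0, 5.0])             # an arbitrary vector in R^3

zero_vector = np.zeros(V_DIM)              # zero of the vector space
zero_scalar = 0.0                          # zero of the field of scalars
zero_operator = np.zeros((V_DIM, V_DIM))   # zero linear map R^3 -> R^3
zero_functional = np.zeros((1, V_DIM))     # zero linear map R^3 -> R

assert np.allclose(v + zero_vector, v)                # adding it leaves v unchanged
assert np.allclose(zero_scalar * v, zero_vector)      # scaling by it gives the zero vector
assert np.allclose(zero_operator @ v, zero_vector)    # it maps every v to the zero vector
assert np.allclose(zero_functional @ v, zero_scalar)  # it maps every v to the zero scalar
```

Each assertion checks exactly the defining property stated in the corresponding paragraph.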
In an algebraic context where there is a notion of addition, $0$ is the element such that
$$
x + 0 = x
$$
for every $x$.
If the context is the real numbers, then $0$ is just a number. If the context is the Euclidean coordinate plane, $0$ is the vector $(0,0)$. If the context is the set of real-valued functions on the unit interval, then $0$ is the function whose value at every point is $0$. If the context is the set of linear operators from one vector space to another, then $0$ is the operator whose value at every point of the domain is the $0$ vector in the codomain.
So the meaning of the symbol "$0$" changes depending on the context. That's potentially confusing (which is why you are asking the question). The advantage of using the same symbol in these different contexts is that it's easy to associate that symbol with its behavior: it's the additive identity.
– Ethan Bolker
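A small sketch of the same point (illustrative only; the sample points and function names are made up): the defining equation $x + 0 = x$ checked in three of the contexts mentioned, namely real numbers, vectors in the coordinate plane, and real-valued functions on $[0,1]$ with pointwise addition.

```python
import numpy as np

# Context 1: real numbers
x = 3.7
assert x + 0.0 == x

# Context 2: the coordinate plane, where 0 is the vector (0, 0)
v = np.array([2.0, -1.0])
assert np.array_equal(v + np.zeros(2), v)

# Context 3: real-valued functions on [0, 1], added pointwise;
# here 0 is the function whose value at every point is 0
def add(f, g):
    return lambda t: f(t) + g(t)

def zero_fn(t):
    return 0.0

def f(t):
    return t ** 2 + 1.0

samples = np.linspace(0.0, 1.0, 11)
assert all(add(f, zero_fn)(t) == f(t) for t in samples)
```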
The set of linear operators between two vector spaces $X$, $Y$ is itself a vector space: $Z := \{f: X \to Y \mid f \text{ linear function}\}$, where addition and scalar multiplication are pointwise. Now the zero vector of that vector space $Z$ is the zero operator. So in some sense writing $0$ for the zero operator in fact denotes the $0$ in a specific vector space.
– ComFreek
19 hours ago
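A sketch of this comment's point (the shapes and names are chosen only for illustration), identifying linear maps $\Bbb R^3 \to \Bbb R^2$ with $2\times 3$ matrices, so that addition and scalar multiplication of maps are entrywise and the zero matrix plays the role of the zero operator:

```python
import numpy as np

# Identify linear maps R^3 -> R^2 with 2x3 matrices
A = np.array([[1.0, 0.0, 2.0],
              [0.0, -1.0, 3.0]])
B = np.random.default_rng(0).normal(size=(2, 3))   # another arbitrary map

zero_map = np.zeros((2, 3))   # the zero vector of the space Z of linear maps

# Pointwise operations keep us inside Z, and zero_map is its additive identity
assert np.array_equal(A + zero_map, A)
assert np.array_equal(0.0 * A, zero_map)

# Applied to any x in R^3, the zero map returns the zero vector of R^2
x = np.array([1.0, 2.0, 3.0])
assert np.array_equal(zero_map @ x, np.zeros(2))
```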
@ComFreek In fact here all of the OP's examples are vector spaces. The convention extends to modules over a ring ( en.wikipedia.org/wiki/Module_(mathematics) ) and to some even more general abstractions.
– Ethan Bolker
13 hours ago
A pedantic answer would be that those differences are not defined, since subtraction requires two operands of the same type, and those values all have different types. As a matter of good habit, one does not even start considering values in algebra without first specifying the basic set from which they are taken, in other words their type. In linear algebra the two most basic types are the field of scalars (often denoted by $F$ or $K$) and some space of vectors over that field (often denoted by $V$ or some similar letter), and there are various mechanisms to form new basic sets, such as Cartesian products, matrices, and sets of linear functions $V\to W$ where $V$ and $W$ are vector spaces over $F$ (possibly the same one). All these basic sets are assumed to be disjoint, so that any given value belongs to at most one of them, which set then gives the type of that value. Usually these sets come equipped with a set of operations; these can only be applied to elements of that set. To complicate the description (but simplify life), operations on different sets often carry the same name; for instance, the symbol '$+$' can be used for the addition of scalars, vectors, matrices, linear maps, and many more things; in computer science this is called operator overloading. The reader is supposed to resolve the ambiguity by checking the types of the arguments given to the operators.
A special complication occurs for the symbol $0$ (and to some extent for other symbols like $\mathbf I$), which is overloaded in the same sense: it refers to different special values in each type (in linear algebra there is hardly any type that does not have its own value $0$). In this sense it can be viewed as an overloaded operator with no (i.e., $0\in\Bbb N$) arguments. This poses an obvious difficulty with deducing the intended meaning from the types of the arguments, so instead, for '$0$', it must in some other manner be clear from the context. If you see $0+x$ in a formula, for instance, you may assume that this is the zero value of the same type as $x$, but in some cases the context can be really ambiguous; in that case it is the task of the author to make clear what type of "zero" is meant. But in no case should one pretend that the zero scalar, the zero vector, a zero matrix, and a zero linear map are the same thing; the distinction goes even further, as the zero vectors of unrelated vector spaces, as well as zero matrices of different dimensions, are not assumed to be the same thing, even though they all share the same name. (In practice there is not much difficulty in living with this theoretical ambiguity, and one might even maintain that writing $0$ indicates that the expression at that place is endowed with the quality of "zeroness", which usually completely governs how it behaves.)
– Marc van Leeuwen
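The operator-overloading analogy can be made literal in code. The sketch below (the class and constant names are invented for the example) overloads '$+$' for a toy 2-vector type: each type carries its own zero, and mixing the zero scalar with a vector is rejected rather than silently identified.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec2:
    x: float
    y: float

    # '+' is overloaded: this definition only applies to Vec2 + Vec2
    def __add__(self, other):
        if not isinstance(other, Vec2):
            return NotImplemented   # refuse to mix types, e.g. Vec2 + 0
        return Vec2(self.x + other.x, self.y + other.y)

ZERO_VEC2 = Vec2(0.0, 0.0)   # the zero of type Vec2
ZERO_SCALAR = 0.0            # the zero of type float

v = Vec2(1.0, -2.0)
assert v + ZERO_VEC2 == v              # each type has its own additive identity
assert 3.5 + ZERO_SCALAR == 3.5

try:
    v + ZERO_SCALAR                    # the two zeros are not interchangeable
except TypeError:
    print("Vec2 + float is not defined: the zero scalar is not the zero vector")
```

The TypeError is the programming counterpart of the answer's point that the zero values of different types are simply not the same thing.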
I like to see it as a dependently typed one-argument function whose argument is inferred from the context most of the time: $0: \Pi_{v:\,\mathrm{vectorspace}}\ \textrm{dom}(v)$. It accepts a vector space and returns its zero element as an element of its domain. Of course, you can generalize up to monoids. Sometimes people write $0_V$ and $0_W$ exactly to help disambiguate or ease the human reading process.
– ComFreek
13 hours ago
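Python has no dependent types, but the idea can be roughly approximated (the function and the toy space descriptions below are invented for illustration): one symbol, applied to a description of the space, returns that space's zero element, which is what writing $0_V$ versus $0_W$ makes explicit.

```python
import numpy as np

def zero_of(space):
    """Return the zero element of the given (toy) space description.

    'space' plays the role of the subscript in the notation 0_V: the same
    symbol 'zero_of', applied to different spaces, yields different zeros.
    """
    kind, *shape = space
    if kind == "scalar":
        return 0.0
    if kind == "vector":
        return np.zeros(shape[0])
    if kind == "operator":
        return np.zeros((shape[0], shape[1]))
    raise ValueError(f"unknown space: {space}")

assert zero_of(("scalar",)) == 0.0
assert np.array_equal(zero_of(("vector", 3)), np.zeros(3))
assert zero_of(("operator", 2, 3)).shape == (2, 3)
```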