PyTorch Basics
PyTorch DataTypes
Most of PyTorch's data types are derived from C and mirror NumPy's; in addition, cuda-type GPU tensors are supported.
Torch defines 10 tensor types with CPU and GPU variants which are as follows:
Data type | dtype | CPU tensor | GPU tensor |
---|---|---|---|
32-bit floating point | torch.float32 or torch.float | torch.FloatTensor | torch.cuda.FloatTensor |
64-bit floating point | torch.float64 or torch.double | torch.DoubleTensor | torch.cuda.DoubleTensor |
16-bit floating point [1] | torch.float16 or torch.half | torch.HalfTensor | torch.cuda.HalfTensor |
16-bit floating point [2] | torch.bfloat16 | torch.BFloat16Tensor | torch.cuda.BFloat16Tensor |
32-bit complex | torch.complex32 | | |
64-bit complex | torch.complex64 | | |
128-bit complex | torch.complex128 or torch.cdouble | | |
8-bit integer (unsigned) | torch.uint8 | torch.ByteTensor | torch.cuda.ByteTensor |
8-bit integer (signed) | torch.int8 | torch.CharTensor | torch.cuda.CharTensor |
16-bit integer (signed) | torch.int16 or torch.short | torch.ShortTensor | torch.cuda.ShortTensor |
32-bit integer (signed) | torch.int32 or torch.int | torch.IntTensor | torch.cuda.IntTensor |
64-bit integer (signed) | torch.int64 or torch.long | torch.LongTensor | torch.cuda.LongTensor |
Boolean | torch.bool | torch.BoolTensor | torch.cuda.BoolTensor |
quantized 8-bit integer (unsigned) | torch.quint8 | torch.ByteTensor | / |
quantized 8-bit integer (signed) | torch.qint8 | torch.CharTensor | / |
quantized 32-bit integer (signed) | torch.qint32 | torch.IntTensor | / |
quantized 4-bit integer (unsigned) [3] | torch.quint4x2 | torch.ByteTensor | / |
[1] Sometimes referred to as binary16: uses 1 sign, 5 exponent, and 10 significand bits. Useful when precision is important at the expense of range.
[2] Sometimes referred to as Brain Floating Point: uses 1 sign, 8 exponent, and 7 significand bits. Useful when range is important, since it has the same number of exponent bits as float32.
[3] The quantized 4-bit integer is stored as an 8-bit signed integer. Currently it is only supported in the EmbeddingBag operator.
torch.Tensor is an alias for the default tensor type (torch.FloatTensor).
Reference : https://pytorch.org/docs/stable/tensors.html
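A minimal sketch of how these dtypes behave in practice (assuming PyTorch is installed):

```python
import torch

# torch.Tensor is an alias for the default type, torch.FloatTensor
x = torch.tensor([1.0, 2.0, 3.0])
assert x.dtype == torch.float32

# Specify a dtype explicitly at creation
i = torch.tensor([1, 2, 3], dtype=torch.int64)
h = torch.zeros(2, 2, dtype=torch.float16)

# Convert between dtypes with .to() (or shortcuts like .double())
d = x.to(torch.float64)
assert d.dtype == torch.float64

# Moving to the GPU yields the cuda variant of the same type
if torch.cuda.is_available():
    g = x.cuda()
    print(g.type())  # torch.cuda.FloatTensor
```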
view() vs. reshape()
Both view() and reshape() can be used to change the shape of a tensor.
torch.view() returns a tensor with the new shape that shares its data memory with the original tensor. Therefore, if a value in the original tensor changes, the corresponding value in the viewed tensor changes as well.
In contrast, torch.reshape() returns either a copy of the original tensor or a view, and you cannot know in advance which of the two you will get. If the result must share the same storage as the input, use view(); if you explicitly need a copy, use clone().
Reference : https://sanghyu.tistory.com/3
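A short sketch of the difference (assuming PyTorch is installed): view() shares memory and fails on non-contiguous tensors, while reshape() handles both cases.

```python
import torch

# view() shares data memory with the original tensor
a = torch.arange(6)
v = a.view(2, 3)
a[0] = 100
assert v[0, 0].item() == 100  # the change is visible through the view

# view() requires contiguous memory; reshape() works either way
t = torch.arange(6).view(2, 3).t()  # transpose -> non-contiguous
try:
    t.view(6)                       # raises RuntimeError
except RuntimeError:
    pass
r = t.reshape(6)                    # succeeds (returns a copy here)
```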
squeeze() and unsqueeze()
squeeze() returns a tensor with all dimensions of size 1 removed (or only the given dimension if dim is specified), while unsqueeze() returns a tensor with a new dimension of size 1 inserted at the specified position. Both return views that share data with the input tensor.
Reference : python - Pytorch squeeze and unsqueeze - Stack Overflow
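A minimal example of both operations (assuming PyTorch is installed):

```python
import torch

x = torch.zeros(2, 1, 3)

s = x.squeeze()       # removes every size-1 dimension
assert s.shape == (2, 3)

s1 = x.squeeze(1)     # removes only dimension 1
assert s1.shape == (2, 3)

u = s.unsqueeze(0)    # inserts a new size-1 dimension at position 0
assert u.shape == (1, 2, 3)
```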
PyTorch Multiplication Operations
- .dot( ) : vector dot product
- .mm( ) : matrix multiplication
- .matmul( ) : matrix multiplication (supports broadcasting)
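The three operations above can be contrasted in a short sketch (assuming PyTorch is installed): dot() is for 1-D vectors only, mm() is a strict 2-D matrix multiply, and matmul() additionally broadcasts batch dimensions.

```python
import torch

# .dot(): 1-D vectors only
v1 = torch.tensor([1.0, 2.0])
v2 = torch.tensor([3.0, 4.0])
assert v1.dot(v2).item() == 11.0   # 1*3 + 2*4

# .mm(): strict 2-D matrix multiplication, no broadcasting
m1 = torch.ones(2, 3)
m2 = torch.ones(3, 4)
assert m1.mm(m2).shape == (2, 4)

# .matmul(): broadcasts batch dimensions that .mm() rejects
b1 = torch.ones(5, 2, 3)
assert b1.matmul(m2).shape == (5, 2, 4)
```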